Test Report: Docker_Linux_crio_arm64 17764

47aff3550d8f737faf92680522e584556adb8789:2023-12-12:32246

Test fail (12/314)

TestAddons/parallel/Ingress (167.2s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-513852 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-513852 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-513852 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a29ed311-974a-4330-9d5d-9905b8b2a957] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a29ed311-974a-4330-9d5d-9905b8b2a957] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.014587774s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-513852 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-513852 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.493765198s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-513852 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-513852 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.075929789s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-513852 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-513852 addons disable ingress-dns --alsologtostderr -v=1: (1.268139959s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-513852 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-513852 addons disable ingress --alsologtostderr -v=1: (7.778435145s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-513852
helpers_test.go:235: (dbg) docker inspect addons-513852:

-- stdout --
	[
	    {
	        "Id": "ce2d53620b64e156938dec5a07dcf4ce9ce60732763a7a769f51e71c667ffeef",
	        "Created": "2023-12-12T00:12:12.845410053Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1118408,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T00:12:13.167413464Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5372d9a9dbba152548ea1c7dddaca1a9a8c998722f22aaa148c1ee00bf6473be",
	        "ResolvConfPath": "/var/lib/docker/containers/ce2d53620b64e156938dec5a07dcf4ce9ce60732763a7a769f51e71c667ffeef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce2d53620b64e156938dec5a07dcf4ce9ce60732763a7a769f51e71c667ffeef/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce2d53620b64e156938dec5a07dcf4ce9ce60732763a7a769f51e71c667ffeef/hosts",
	        "LogPath": "/var/lib/docker/containers/ce2d53620b64e156938dec5a07dcf4ce9ce60732763a7a769f51e71c667ffeef/ce2d53620b64e156938dec5a07dcf4ce9ce60732763a7a769f51e71c667ffeef-json.log",
	        "Name": "/addons-513852",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-513852:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-513852",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a1c74c3ba85f1c0bb9c17328adca6839f763072fd13b4edd025f8ad800a85c44-init/diff:/var/lib/docker/overlay2/c2a4fdcea722509eecd2151e38f63a7bf15f9db138183afe352dd4d4bae4600f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a1c74c3ba85f1c0bb9c17328adca6839f763072fd13b4edd025f8ad800a85c44/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a1c74c3ba85f1c0bb9c17328adca6839f763072fd13b4edd025f8ad800a85c44/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a1c74c3ba85f1c0bb9c17328adca6839f763072fd13b4edd025f8ad800a85c44/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-513852",
	                "Source": "/var/lib/docker/volumes/addons-513852/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-513852",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-513852",
	                "name.minikube.sigs.k8s.io": "addons-513852",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f065a0147216bc31a78d162befc74d6c0ea3d9202fa33ad349a1269cf8c8a082",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34010"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34009"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34006"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34008"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34007"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f065a0147216",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-513852": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ce2d53620b64",
	                        "addons-513852"
	                    ],
	                    "NetworkID": "5d39f67815fb8bc7c9d433babd97a1dbd454bd30553390a89917be64d14a1586",
	                    "EndpointID": "2e5d6648e00b7aa54def1f7f1bb9a7eb9650373e02bc52823e8d931ae9c2b24c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-513852 -n addons-513852
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-513852 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-513852 logs -n 25: (1.598198843s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-661903   | jenkins | v1.32.0 | 12 Dec 23 00:10 UTC |                     |
	|         | -p download-only-661903              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-661903   | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |                     |
	|         | -p download-only-661903              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-661903   | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |                     |
	|         | -p download-only-661903              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:11 UTC |
	| delete  | -p download-only-661903              | download-only-661903   | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:11 UTC |
	| delete  | -p download-only-661903              | download-only-661903   | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:11 UTC |
	| start   | --download-only -p                   | download-docker-765600 | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |                     |
	|         | download-docker-765600               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-765600            | download-docker-765600 | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:11 UTC |
	| start   | --download-only -p                   | binary-mirror-675945   | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |                     |
	|         | binary-mirror-675945                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41867               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-675945              | binary-mirror-675945   | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:11 UTC |
	| addons  | enable dashboard -p                  | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |                     |
	|         | addons-513852                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |                     |
	|         | addons-513852                        |                        |         |         |                     |                     |
	| start   | -p addons-513852 --wait=true         | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:14 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:14 UTC |
	|         | -p addons-513852                     |                        |         |         |                     |                     |
	| addons  | addons-513852 addons                 | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:14 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-513852 ip                     | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:14 UTC |
	| addons  | addons-513852 addons disable         | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:14 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:14 UTC |
	|         | addons-513852                        |                        |         |         |                     |                     |
	| ssh     | addons-513852 ssh curl -s            | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:14 UTC |
	|         | addons-513852                        |                        |         |         |                     |                     |
	| ip      | addons-513852 ip                     | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:17 UTC | 12 Dec 23 00:17 UTC |
	| addons  | addons-513852 addons disable         | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:17 UTC | 12 Dec 23 00:17 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-513852 addons disable         | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:17 UTC | 12 Dec 23 00:17 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:11:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:11:49.980348 1117956 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:11:49.980505 1117956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:49.980515 1117956 out.go:309] Setting ErrFile to fd 2...
	I1212 00:11:49.980522 1117956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:49.980797 1117956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	I1212 00:11:49.981227 1117956 out.go:303] Setting JSON to false
	I1212 00:11:49.982106 1117956 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24856,"bootTime":1702315054,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 00:11:49.982185 1117956 start.go:138] virtualization:  
	I1212 00:11:49.984397 1117956 out.go:177] * [addons-513852] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:11:49.987330 1117956 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:11:49.989320 1117956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:11:49.987465 1117956 notify.go:220] Checking for updates...
	I1212 00:11:49.992478 1117956 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:11:49.994441 1117956 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	I1212 00:11:49.996319 1117956 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:11:49.998290 1117956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:11:50.003522 1117956 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:11:50.029057 1117956 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:11:50.029180 1117956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:50.117727 1117956 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-12 00:11:50.108281719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:11:50.117855 1117956 docker.go:295] overlay module found
	I1212 00:11:50.121000 1117956 out.go:177] * Using the docker driver based on user configuration
	I1212 00:11:50.123235 1117956 start.go:298] selected driver: docker
	I1212 00:11:50.123260 1117956 start.go:902] validating driver "docker" against <nil>
	I1212 00:11:50.123282 1117956 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:11:50.123895 1117956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:50.189471 1117956 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-12 00:11:50.179958072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:11:50.189618 1117956 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 00:11:50.189844 1117956 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:11:50.192110 1117956 out.go:177] * Using Docker driver with root privileges
	I1212 00:11:50.193808 1117956 cni.go:84] Creating CNI manager for ""
	I1212 00:11:50.193832 1117956 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:11:50.193844 1117956 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:11:50.193858 1117956 start_flags.go:323] config:
	{Name:addons-513852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-513852 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:11:50.196055 1117956 out.go:177] * Starting control plane node addons-513852 in cluster addons-513852
	I1212 00:11:50.197869 1117956 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 00:11:50.199533 1117956 out.go:177] * Pulling base image ...
	I1212 00:11:50.201391 1117956 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 00:11:50.201455 1117956 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1212 00:11:50.201467 1117956 cache.go:56] Caching tarball of preloaded images
	I1212 00:11:50.201487 1117956 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:11:50.201562 1117956 preload.go:174] Found /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:11:50.201572 1117956 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 00:11:50.201918 1117956 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/config.json ...
	I1212 00:11:50.201947 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/config.json: {Name:mk7a236300fb3ff19195b124fc742b2f1a01fa4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:50.218590 1117956 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 to local cache
	I1212 00:11:50.218695 1117956 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local cache directory
	I1212 00:11:50.218729 1117956 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local cache directory, skipping pull
	I1212 00:11:50.218737 1117956 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 exists in cache, skipping pull
	I1212 00:11:50.218744 1117956 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 as a tarball
	I1212 00:11:50.218750 1117956 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 from local cache
	I1212 00:12:05.913714 1117956 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 from cached tarball
	I1212 00:12:05.913751 1117956 cache.go:194] Successfully downloaded all kic artifacts
	I1212 00:12:05.913816 1117956 start.go:365] acquiring machines lock for addons-513852: {Name:mk7c3507316ea70dea507396c4d038034300e987 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:12:05.914522 1117956 start.go:369] acquired machines lock for "addons-513852" in 683.032µs
	I1212 00:12:05.914556 1117956 start.go:93] Provisioning new machine with config: &{Name:addons-513852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-513852 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:12:05.914649 1117956 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:12:05.916943 1117956 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1212 00:12:05.917189 1117956 start.go:159] libmachine.API.Create for "addons-513852" (driver="docker")
	I1212 00:12:05.917218 1117956 client.go:168] LocalClient.Create starting
	I1212 00:12:05.917346 1117956 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem
	I1212 00:12:06.434649 1117956 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem
	I1212 00:12:06.615409 1117956 cli_runner.go:164] Run: docker network inspect addons-513852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:12:06.631768 1117956 cli_runner.go:211] docker network inspect addons-513852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:12:06.631859 1117956 network_create.go:281] running [docker network inspect addons-513852] to gather additional debugging logs...
	I1212 00:12:06.631880 1117956 cli_runner.go:164] Run: docker network inspect addons-513852
	W1212 00:12:06.649038 1117956 cli_runner.go:211] docker network inspect addons-513852 returned with exit code 1
	I1212 00:12:06.649071 1117956 network_create.go:284] error running [docker network inspect addons-513852]: docker network inspect addons-513852: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-513852 not found
	I1212 00:12:06.649083 1117956 network_create.go:286] output of [docker network inspect addons-513852]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-513852 not found
	
	** /stderr **
	I1212 00:12:06.649195 1117956 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:12:06.666526 1117956 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024fccb0}
	I1212 00:12:06.666562 1117956 network_create.go:124] attempt to create docker network addons-513852 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 00:12:06.666624 1117956 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-513852 addons-513852
	I1212 00:12:06.736540 1117956 network_create.go:108] docker network addons-513852 192.168.49.0/24 created
	I1212 00:12:06.736572 1117956 kic.go:121] calculated static IP "192.168.49.2" for the "addons-513852" container
	I1212 00:12:06.736641 1117956 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:12:06.752938 1117956 cli_runner.go:164] Run: docker volume create addons-513852 --label name.minikube.sigs.k8s.io=addons-513852 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:12:06.770605 1117956 oci.go:103] Successfully created a docker volume addons-513852
	I1212 00:12:06.770689 1117956 cli_runner.go:164] Run: docker run --rm --name addons-513852-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-513852 --entrypoint /usr/bin/test -v addons-513852:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -d /var/lib
	I1212 00:12:08.567386 1117956 cli_runner.go:217] Completed: docker run --rm --name addons-513852-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-513852 --entrypoint /usr/bin/test -v addons-513852:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -d /var/lib: (1.79663978s)
	I1212 00:12:08.567417 1117956 oci.go:107] Successfully prepared a docker volume addons-513852
	I1212 00:12:08.567450 1117956 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 00:12:08.567475 1117956 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:12:08.567548 1117956 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-513852:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:12:12.755902 1117956 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-513852:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -I lz4 -xf /preloaded.tar -C /extractDir: (4.188315546s)
	I1212 00:12:12.755932 1117956 kic.go:203] duration metric: took 4.188461 seconds to extract preloaded images to volume
	W1212 00:12:12.756070 1117956 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 00:12:12.756212 1117956 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:12:12.824907 1117956 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-513852 --name addons-513852 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-513852 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-513852 --network addons-513852 --ip 192.168.49.2 --volume addons-513852:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401
	I1212 00:12:13.179806 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Running}}
	I1212 00:12:13.200941 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:13.229388 1117956 cli_runner.go:164] Run: docker exec addons-513852 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:12:13.309684 1117956 oci.go:144] the created container "addons-513852" has a running status.
	I1212 00:12:13.309719 1117956 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa...
	I1212 00:12:13.816261 1117956 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:12:13.847239 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:13.874214 1117956 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:12:13.874233 1117956 kic_runner.go:114] Args: [docker exec --privileged addons-513852 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:12:13.947728 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:13.975467 1117956 machine.go:88] provisioning docker machine ...
	I1212 00:12:13.975496 1117956 ubuntu.go:169] provisioning hostname "addons-513852"
	I1212 00:12:13.975561 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:14.000966 1117956 main.go:141] libmachine: Using SSH client type: native
	I1212 00:12:14.001503 1117956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34010 <nil> <nil>}
	I1212 00:12:14.001526 1117956 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-513852 && echo "addons-513852" | sudo tee /etc/hostname
	I1212 00:12:14.207917 1117956 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-513852
	
	I1212 00:12:14.207993 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:14.234038 1117956 main.go:141] libmachine: Using SSH client type: native
	I1212 00:12:14.234444 1117956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34010 <nil> <nil>}
	I1212 00:12:14.234466 1117956 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-513852' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-513852/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-513852' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:12:14.382990 1117956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:12:14.383029 1117956 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17764-1111943/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-1111943/.minikube}
	I1212 00:12:14.383048 1117956 ubuntu.go:177] setting up certificates
	I1212 00:12:14.383056 1117956 provision.go:83] configureAuth start
	I1212 00:12:14.383123 1117956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-513852
	I1212 00:12:14.408999 1117956 provision.go:138] copyHostCerts
	I1212 00:12:14.409069 1117956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem (1082 bytes)
	I1212 00:12:14.409207 1117956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem (1123 bytes)
	I1212 00:12:14.409407 1117956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem (1679 bytes)
	I1212 00:12:14.409485 1117956 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem org=jenkins.addons-513852 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-513852]
	I1212 00:12:14.645539 1117956 provision.go:172] copyRemoteCerts
	I1212 00:12:14.645619 1117956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:12:14.645665 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:14.663953 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:14.768200 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:12:14.798848 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1212 00:12:14.826637 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:12:14.855015 1117956 provision.go:86] duration metric: configureAuth took 471.944905ms
	I1212 00:12:14.855051 1117956 ubuntu.go:193] setting minikube options for container-runtime
	I1212 00:12:14.855241 1117956 config.go:182] Loaded profile config "addons-513852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:12:14.855355 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:14.873699 1117956 main.go:141] libmachine: Using SSH client type: native
	I1212 00:12:14.874123 1117956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34010 <nil> <nil>}
	I1212 00:12:14.874153 1117956 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:12:15.141653 1117956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:12:15.141736 1117956 machine.go:91] provisioned docker machine in 1.166248968s
	I1212 00:12:15.141760 1117956 client.go:171] LocalClient.Create took 9.224530447s
	I1212 00:12:15.141797 1117956 start.go:167] duration metric: libmachine.API.Create for "addons-513852" took 9.224606777s
	I1212 00:12:15.141806 1117956 start.go:300] post-start starting for "addons-513852" (driver="docker")
	I1212 00:12:15.141816 1117956 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:12:15.141896 1117956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:12:15.141944 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:15.161093 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:15.264006 1117956 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:12:15.268211 1117956 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:12:15.268247 1117956 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 00:12:15.268262 1117956 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 00:12:15.268269 1117956 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 00:12:15.268278 1117956 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/addons for local assets ...
	I1212 00:12:15.268341 1117956 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/files for local assets ...
	I1212 00:12:15.268371 1117956 start.go:303] post-start completed in 126.559429ms
	I1212 00:12:15.268691 1117956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-513852
	I1212 00:12:15.286818 1117956 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/config.json ...
	I1212 00:12:15.287102 1117956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:12:15.287156 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:15.304557 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:15.399081 1117956 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:12:15.404353 1117956 start.go:128] duration metric: createHost completed in 9.489689455s
	I1212 00:12:15.404378 1117956 start.go:83] releasing machines lock for "addons-513852", held for 9.489840737s
	I1212 00:12:15.404441 1117956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-513852
	I1212 00:12:15.421683 1117956 ssh_runner.go:195] Run: cat /version.json
	I1212 00:12:15.421739 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:15.421808 1117956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:12:15.421863 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:15.448504 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:15.449135 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:15.674311 1117956 ssh_runner.go:195] Run: systemctl --version
	I1212 00:12:15.679840 1117956 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:12:15.824649 1117956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:12:15.830080 1117956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:12:15.852154 1117956 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 00:12:15.852231 1117956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:12:15.892011 1117956 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1212 00:12:15.892032 1117956 start.go:475] detecting cgroup driver to use...
	I1212 00:12:15.892062 1117956 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 00:12:15.892116 1117956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:12:15.909782 1117956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:12:15.923668 1117956 docker.go:203] disabling cri-docker service (if available) ...
	I1212 00:12:15.923785 1117956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:12:15.939423 1117956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:12:15.956047 1117956 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:12:16.056533 1117956 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:12:16.159154 1117956 docker.go:219] disabling docker service ...
	I1212 00:12:16.159230 1117956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:12:16.180291 1117956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:12:16.193831 1117956 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:12:16.289274 1117956 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:12:16.403568 1117956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:12:16.417192 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:12:16.436268 1117956 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 00:12:16.436360 1117956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:12:16.448285 1117956 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:12:16.448368 1117956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:12:16.460251 1117956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:12:16.472053 1117956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:12:16.483999 1117956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:12:16.494708 1117956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:12:16.504753 1117956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:12:16.515203 1117956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:12:16.609839 1117956 ssh_runner.go:195] Run: sudo systemctl restart crio
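The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup) before CRI-O is restarted. As a minimal sketch, assuming shell access to the node and the same file path, the result could be confirmed with:

	# show only the keys the provisioner just rewrote
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected, given the substitutions above:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	sudo crictl info > /dev/null && echo "CRI-O socket is answering"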
	I1212 00:12:16.736465 1117956 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:12:16.736548 1117956 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:12:16.741081 1117956 start.go:543] Will wait 60s for crictl version
	I1212 00:12:16.741186 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:12:16.745743 1117956 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:12:16.790478 1117956 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1212 00:12:16.790580 1117956 ssh_runner.go:195] Run: crio --version
	I1212 00:12:16.832748 1117956 ssh_runner.go:195] Run: crio --version
	I1212 00:12:16.877873 1117956 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1212 00:12:16.879736 1117956 cli_runner.go:164] Run: docker network inspect addons-513852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:12:16.897288 1117956 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:12:16.901669 1117956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:12:16.914680 1117956 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 00:12:16.914747 1117956 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:12:16.982602 1117956 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 00:12:16.982624 1117956 crio.go:415] Images already preloaded, skipping extraction
	I1212 00:12:16.982680 1117956 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:12:17.029868 1117956 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 00:12:17.029891 1117956 cache_images.go:84] Images are preloaded, skipping loading
	I1212 00:12:17.029970 1117956 ssh_runner.go:195] Run: crio config
	I1212 00:12:17.088349 1117956 cni.go:84] Creating CNI manager for ""
	I1212 00:12:17.088373 1117956 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:12:17.088404 1117956 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 00:12:17.088428 1117956 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-513852 NodeName:addons-513852 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:12:17.088570 1117956 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-513852"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
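The block above is the full kubeadm configuration minikube generates for this node; it is later written to /var/tmp/minikube/kubeadm.yaml.new and passed to kubeadm init (see the Start command further down). As a sketch, assuming kubeadm v1.28.x is available on the node, the same file could be sanity-checked without a real init by using kubeadm's dry-run mode:

	# runs preflight and renders what kubeadm would do, without permanently changing the node
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run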
	I1212 00:12:17.088649 1117956 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-513852 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-513852 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 00:12:17.088718 1117956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 00:12:17.099406 1117956 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:12:17.099488 1117956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:12:17.110021 1117956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1212 00:12:17.130590 1117956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:12:17.151380 1117956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
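Once the kubelet unit, the 10-kubeadm.conf drop-in, and the kubeadm YAML have been copied into place by the three scp steps above, one quick check that systemd actually picks up the override (paths taken from those scp destinations) would be:

	# prints /lib/systemd/system/kubelet.service followed by the 10-kubeadm.conf drop-in
	sudo systemctl cat kubelet
	# the effective ExecStart should match the flags logged above (--hostname-override, --node-ip, --container-runtime-endpoint)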
	I1212 00:12:17.172094 1117956 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:12:17.176367 1117956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:12:17.189616 1117956 certs.go:56] Setting up /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852 for IP: 192.168.49.2
	I1212 00:12:17.189651 1117956 certs.go:190] acquiring lock for shared ca certs: {Name:mk50788b4819ee46b65351495e43cdf246a6ddce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:17.189813 1117956 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key
	I1212 00:12:17.471046 1117956 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt ...
	I1212 00:12:17.471077 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt: {Name:mk63f7231b362eb36ee624ca1d988a5c0eeb54ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:17.471271 1117956 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key ...
	I1212 00:12:17.471284 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key: {Name:mk001a7dec35b6cd75317cfa0518572d810733b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:17.472052 1117956 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key
	I1212 00:12:18.072946 1117956 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.crt ...
	I1212 00:12:18.072985 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.crt: {Name:mk5adb4c4a83191ec01fbd158f8e2301c5b4e380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:18.073187 1117956 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key ...
	I1212 00:12:18.073202 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key: {Name:mk83486da479f56678dc25ea9891063a949213c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:18.073359 1117956 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.key
	I1212 00:12:18.073385 1117956 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt with IP's: []
	I1212 00:12:18.203736 1117956 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt ...
	I1212 00:12:18.203766 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: {Name:mk2ddad058277b67b414650caa9775d45cf301f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:18.203953 1117956 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.key ...
	I1212 00:12:18.203978 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.key: {Name:mkc82ca82fb5fe9dc3da535893414833cbeb9830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:18.204083 1117956 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.key.dd3b5fb2
	I1212 00:12:18.204103 1117956 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 00:12:19.377813 1117956 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.crt.dd3b5fb2 ...
	I1212 00:12:19.377847 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.crt.dd3b5fb2: {Name:mke601a62b20dc2e283b96952577fc54ee9e8063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:19.378033 1117956 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.key.dd3b5fb2 ...
	I1212 00:12:19.378047 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.key.dd3b5fb2: {Name:mk1924c0862ca7e851aef86a9d35758f0682eae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:19.378132 1117956 certs.go:337] copying /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.crt
	I1212 00:12:19.378236 1117956 certs.go:341] copying /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.key
	I1212 00:12:19.378288 1117956 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/proxy-client.key
	I1212 00:12:19.378314 1117956 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/proxy-client.crt with IP's: []
	I1212 00:12:19.582042 1117956 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/proxy-client.crt ...
	I1212 00:12:19.582073 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/proxy-client.crt: {Name:mk3734d85f87c17800a9550539cf823d1b1562fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:19.582279 1117956 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/proxy-client.key ...
	I1212 00:12:19.582293 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/proxy-client.key: {Name:mk4265909e2109577a4034bebd2d8e7075db6fd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:19.582496 1117956 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:12:19.582548 1117956 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:12:19.582578 1117956 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:12:19.582609 1117956 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem (1679 bytes)
	I1212 00:12:19.583214 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 00:12:19.611890 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:12:19.639499 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:12:19.667412 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:12:19.695169 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:12:19.721927 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:12:19.749756 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:12:19.777031 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:12:19.804303 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:12:19.831862 1117956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:12:19.852550 1117956 ssh_runner.go:195] Run: openssl version
	I1212 00:12:19.859270 1117956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:12:19.870709 1117956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:12:19.875024 1117956 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:12:19.875099 1117956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:12:19.883356 1117956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
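The b5213941.0 link name is not arbitrary: it is the OpenSSL subject hash of the minikube CA, which is how OpenSSL locates CA certificates in a hashed directory such as /etc/ssl/certs. A small sketch reproducing it, assuming the same CA path used above:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "${hash}.0"   # prints b5213941.0, matching the symlink created above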
	I1212 00:12:19.894678 1117956 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 00:12:19.898800 1117956 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 00:12:19.898847 1117956 kubeadm.go:404] StartCluster: {Name:addons-513852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-513852 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:12:19.898926 1117956 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:12:19.898996 1117956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:12:19.941002 1117956 cri.go:89] found id: ""
	I1212 00:12:19.941121 1117956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:12:19.951581 1117956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:12:19.962004 1117956 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:12:19.962068 1117956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:12:19.972318 1117956 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:12:19.972391 1117956 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:12:20.031248 1117956 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 00:12:20.031564 1117956 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 00:12:20.076698 1117956 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:12:20.076787 1117956 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1212 00:12:20.076840 1117956 kubeadm.go:322] OS: Linux
	I1212 00:12:20.076889 1117956 kubeadm.go:322] CGROUPS_CPU: enabled
	I1212 00:12:20.076939 1117956 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1212 00:12:20.076986 1117956 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1212 00:12:20.077035 1117956 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1212 00:12:20.077084 1117956 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1212 00:12:20.077134 1117956 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1212 00:12:20.077181 1117956 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1212 00:12:20.077228 1117956 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1212 00:12:20.077288 1117956 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1212 00:12:20.166769 1117956 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:12:20.167349 1117956 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:12:20.167491 1117956 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 00:12:20.414887 1117956 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:12:20.417171 1117956 out.go:204]   - Generating certificates and keys ...
	I1212 00:12:20.417332 1117956 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 00:12:20.417416 1117956 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 00:12:20.776471 1117956 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:12:21.319332 1117956 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:12:21.838896 1117956 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:12:22.401007 1117956 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 00:12:23.123073 1117956 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 00:12:23.123477 1117956 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-513852 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:12:24.014299 1117956 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 00:12:24.014670 1117956 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-513852 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:12:24.725411 1117956 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:12:24.876312 1117956 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:12:25.595368 1117956 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 00:12:25.595685 1117956 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:12:25.935577 1117956 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:12:26.574571 1117956 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:12:27.215100 1117956 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:12:27.515971 1117956 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:12:27.516832 1117956 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:12:27.519493 1117956 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:12:27.521899 1117956 out.go:204]   - Booting up control plane ...
	I1212 00:12:27.522017 1117956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:12:27.522091 1117956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:12:27.522798 1117956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:12:27.535118 1117956 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:12:27.535908 1117956 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:12:27.536186 1117956 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 00:12:27.634330 1117956 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 00:12:34.137227 1117956 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502237 seconds
	I1212 00:12:34.137370 1117956 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:12:34.153363 1117956 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:12:34.678099 1117956 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:12:34.678299 1117956 kubeadm.go:322] [mark-control-plane] Marking the node addons-513852 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:12:35.189766 1117956 kubeadm.go:322] [bootstrap-token] Using token: dlqiuc.q2dtcr4gd8ieq310
	I1212 00:12:35.191953 1117956 out.go:204]   - Configuring RBAC rules ...
	I1212 00:12:35.192069 1117956 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:12:35.196771 1117956 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:12:35.206364 1117956 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:12:35.210211 1117956 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:12:35.214010 1117956 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:12:35.218315 1117956 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:12:35.233026 1117956 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:12:35.498416 1117956 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 00:12:35.633813 1117956 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 00:12:35.634883 1117956 kubeadm.go:322] 
	I1212 00:12:35.634950 1117956 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 00:12:35.634956 1117956 kubeadm.go:322] 
	I1212 00:12:35.635028 1117956 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 00:12:35.635033 1117956 kubeadm.go:322] 
	I1212 00:12:35.635058 1117956 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 00:12:35.635113 1117956 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:12:35.635161 1117956 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:12:35.635166 1117956 kubeadm.go:322] 
	I1212 00:12:35.635223 1117956 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 00:12:35.635229 1117956 kubeadm.go:322] 
	I1212 00:12:35.635274 1117956 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:12:35.635278 1117956 kubeadm.go:322] 
	I1212 00:12:35.635327 1117956 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 00:12:35.635397 1117956 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:12:35.635461 1117956 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:12:35.635468 1117956 kubeadm.go:322] 
	I1212 00:12:35.635547 1117956 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:12:35.635619 1117956 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 00:12:35.635624 1117956 kubeadm.go:322] 
	I1212 00:12:35.635702 1117956 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dlqiuc.q2dtcr4gd8ieq310 \
	I1212 00:12:35.635799 1117956 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:423d166c085e277a11bea519bc38c8d176eb97d5c6d6f0fd8c403765ff119d59 \
	I1212 00:12:35.635819 1117956 kubeadm.go:322] 	--control-plane 
	I1212 00:12:35.635824 1117956 kubeadm.go:322] 
	I1212 00:12:35.635903 1117956 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:12:35.635911 1117956 kubeadm.go:322] 
	I1212 00:12:35.635988 1117956 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dlqiuc.q2dtcr4gd8ieq310 \
	I1212 00:12:35.636084 1117956 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:423d166c085e277a11bea519bc38c8d176eb97d5c6d6f0fd8c403765ff119d59 
	I1212 00:12:35.640258 1117956 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1212 00:12:35.640374 1117956 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:12:35.640507 1117956 cni.go:84] Creating CNI manager for ""
	I1212 00:12:35.640536 1117956 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:12:35.644576 1117956 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 00:12:35.646621 1117956 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:12:35.662664 1117956 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 00:12:35.662683 1117956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 00:12:35.708331 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:12:36.568292 1117956 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:12:36.568461 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:36.568556 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4 minikube.k8s.io/name=addons-513852 minikube.k8s.io/updated_at=2023_12_12T00_12_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:36.726594 1117956 ops.go:34] apiserver oom_adj: -16
	I1212 00:12:36.726716 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:36.831700 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:37.425395 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:37.925775 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:38.425570 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:38.925267 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:39.425165 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:39.925629 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:40.425231 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:40.925207 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:41.425401 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:41.926084 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:42.425138 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:42.925111 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:43.425419 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:43.925666 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:44.425796 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:44.925133 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:45.426010 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:45.926037 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:46.425388 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:46.925638 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:47.425599 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:47.925511 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:48.070642 1117956 kubeadm.go:1088] duration metric: took 11.502239293s to wait for elevateKubeSystemPrivileges.
	I1212 00:12:48.070673 1117956 kubeadm.go:406] StartCluster complete in 28.17182842s
	I1212 00:12:48.070690 1117956 settings.go:142] acquiring lock: {Name:mk4639df610f4394c6679c82a1803a108086063e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:48.071250 1117956 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:12:48.071631 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/kubeconfig: {Name:mk6bda1f8356012618f11e41d531a3f786e443d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:48.072867 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:12:48.073159 1117956 config.go:182] Loaded profile config "addons-513852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:12:48.073310 1117956 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1212 00:12:48.073374 1117956 addons.go:69] Setting volumesnapshots=true in profile "addons-513852"
	I1212 00:12:48.073390 1117956 addons.go:231] Setting addon volumesnapshots=true in "addons-513852"
	I1212 00:12:48.073442 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.073902 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.075151 1117956 addons.go:69] Setting ingress-dns=true in profile "addons-513852"
	I1212 00:12:48.075187 1117956 addons.go:231] Setting addon ingress-dns=true in "addons-513852"
	I1212 00:12:48.075232 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.075666 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.078271 1117956 addons.go:69] Setting inspektor-gadget=true in profile "addons-513852"
	I1212 00:12:48.078308 1117956 addons.go:231] Setting addon inspektor-gadget=true in "addons-513852"
	I1212 00:12:48.078357 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.078792 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.079167 1117956 addons.go:69] Setting cloud-spanner=true in profile "addons-513852"
	I1212 00:12:48.079189 1117956 addons.go:231] Setting addon cloud-spanner=true in "addons-513852"
	I1212 00:12:48.079230 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.079621 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.080752 1117956 addons.go:69] Setting metrics-server=true in profile "addons-513852"
	I1212 00:12:48.080784 1117956 addons.go:231] Setting addon metrics-server=true in "addons-513852"
	I1212 00:12:48.080819 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.082527 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.085987 1117956 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-513852"
	I1212 00:12:48.086054 1117956 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-513852"
	I1212 00:12:48.086095 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.086547 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.087006 1117956 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-513852"
	I1212 00:12:48.087030 1117956 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-513852"
	I1212 00:12:48.087071 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.087485 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.093064 1117956 addons.go:69] Setting registry=true in profile "addons-513852"
	I1212 00:12:48.093101 1117956 addons.go:231] Setting addon registry=true in "addons-513852"
	I1212 00:12:48.093148 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.093648 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.099799 1117956 addons.go:69] Setting default-storageclass=true in profile "addons-513852"
	I1212 00:12:48.099844 1117956 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-513852"
	I1212 00:12:48.100207 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.110183 1117956 addons.go:69] Setting storage-provisioner=true in profile "addons-513852"
	I1212 00:12:48.110222 1117956 addons.go:231] Setting addon storage-provisioner=true in "addons-513852"
	I1212 00:12:48.110266 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.110712 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.124211 1117956 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-513852"
	I1212 00:12:48.124252 1117956 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-513852"
	I1212 00:12:48.124588 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.126828 1117956 addons.go:69] Setting gcp-auth=true in profile "addons-513852"
	I1212 00:12:48.126862 1117956 mustload.go:65] Loading cluster: addons-513852
	I1212 00:12:48.127057 1117956 config.go:182] Loaded profile config "addons-513852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:12:48.127401 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.141927 1117956 addons.go:69] Setting ingress=true in profile "addons-513852"
	I1212 00:12:48.141966 1117956 addons.go:231] Setting addon ingress=true in "addons-513852"
	I1212 00:12:48.142025 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.142483 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.255134 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1212 00:12:48.257186 1117956 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1212 00:12:48.257206 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1212 00:12:48.257317 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.293347 1117956 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1212 00:12:48.297531 1117956 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 00:12:48.297589 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1212 00:12:48.297669 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.304868 1117956 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1212 00:12:48.306784 1117956 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 00:12:48.306803 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 00:12:48.306867 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.315959 1117956 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1212 00:12:48.319841 1117956 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1212 00:12:48.320023 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1212 00:12:48.320120 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.354977 1117956 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1212 00:12:48.358041 1117956 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1212 00:12:48.358089 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1212 00:12:48.358176 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.374326 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1212 00:12:48.383587 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1212 00:12:48.386638 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1212 00:12:48.391096 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1212 00:12:48.385820 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:12:48.374567 1117956 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1212 00:12:48.374573 1117956 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:12:48.382178 1117956 addons.go:231] Setting addon default-storageclass=true in "addons-513852"
	I1212 00:12:48.385901 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.396878 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1212 00:12:48.398393 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1212 00:12:48.399944 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1212 00:12:48.398784 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.399620 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.398234 1117956 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-513852"
	I1212 00:12:48.401765 1117956 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1212 00:12:48.409655 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1212 00:12:48.411494 1117956 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1212 00:12:48.411510 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1212 00:12:48.411564 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.409580 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.410533 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.452998 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.462865 1117956 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-513852" context rescaled to 1 replicas
	I1212 00:12:48.462900 1117956 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:12:48.410624 1117956 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:12:48.410946 1117956 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 00:12:48.467218 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1212 00:12:48.467294 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.477553 1117956 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 00:12:48.474647 1117956 out.go:177] * Verifying Kubernetes components...
	I1212 00:12:48.474664 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:12:48.483621 1117956 out.go:177]   - Using image docker.io/registry:2.8.3
	I1212 00:12:48.481654 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.499718 1117956 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1212 00:12:48.497492 1117956 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 00:12:48.497553 1117956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:12:48.497634 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.498482 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.504363 1117956 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1212 00:12:48.504379 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1212 00:12:48.504489 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.524243 1117956 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 00:12:48.524279 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1212 00:12:48.524390 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.539107 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.557499 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.615104 1117956 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:12:48.615125 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:12:48.615185 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.638693 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.673175 1117956 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1212 00:12:48.674776 1117956 out.go:177]   - Using image docker.io/busybox:stable
	I1212 00:12:48.682657 1117956 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 00:12:48.682682 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1212 00:12:48.682745 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.674016 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.674951 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.713497 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.730748 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.748403 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.758481 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.804133 1117956 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1212 00:12:48.804155 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1212 00:12:48.904899 1117956 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1212 00:12:48.904923 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1212 00:12:48.940403 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 00:12:49.035446 1117956 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 00:12:49.035507 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1212 00:12:49.039737 1117956 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1212 00:12:49.039772 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1212 00:12:49.046405 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1212 00:12:49.083871 1117956 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1212 00:12:49.083935 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1212 00:12:49.095572 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 00:12:49.128273 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 00:12:49.138307 1117956 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1212 00:12:49.138368 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1212 00:12:49.194599 1117956 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1212 00:12:49.194672 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1212 00:12:49.203105 1117956 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1212 00:12:49.203166 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1212 00:12:49.248661 1117956 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 00:12:49.248699 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 00:12:49.259141 1117956 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1212 00:12:49.259168 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1212 00:12:49.267504 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:12:49.280583 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:12:49.347706 1117956 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1212 00:12:49.347731 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1212 00:12:49.364875 1117956 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1212 00:12:49.364898 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1212 00:12:49.368016 1117956 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1212 00:12:49.368038 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1212 00:12:49.374538 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 00:12:49.463648 1117956 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1212 00:12:49.463681 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1212 00:12:49.468569 1117956 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 00:12:49.468592 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 00:12:49.547922 1117956 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1212 00:12:49.547946 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1212 00:12:49.567320 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1212 00:12:49.573714 1117956 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1212 00:12:49.573746 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1212 00:12:49.693329 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 00:12:49.727590 1117956 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1212 00:12:49.727615 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1212 00:12:49.775466 1117956 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 00:12:49.775496 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1212 00:12:49.805209 1117956 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1212 00:12:49.805239 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1212 00:12:49.925543 1117956 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 00:12:49.925567 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1212 00:12:49.958326 1117956 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1212 00:12:49.958353 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1212 00:12:49.973296 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 00:12:50.097361 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 00:12:50.104837 1117956 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1212 00:12:50.104863 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1212 00:12:50.206610 1117956 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1212 00:12:50.206635 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1212 00:12:50.340267 1117956 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1212 00:12:50.340297 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1212 00:12:50.403923 1117956 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1212 00:12:50.403994 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1212 00:12:50.494557 1117956 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 00:12:50.494628 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1212 00:12:50.565874 1117956 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.166966289s)
	I1212 00:12:50.565952 1117956 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
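The bash pipeline that just completed injects a hosts block into the coredns ConfigMap in kube-system so that host.minikube.internal resolves to the container gateway (192.168.49.1 on this run). A minimal way to confirm the record landed, assuming access to the same cluster, is to grep the ConfigMap; the expected fragment (reconstructed from the sed expression above, not captured separately from this run) is shown in the comment:

    # The sed pipeline inserts a block equivalent to:
    #         hosts {
    #            192.168.49.1 host.minikube.internal
    #            fallthrough
    #         }
    # just before the existing "forward . /etc/resolv.conf" directive.
    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'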
	I1212 00:12:50.566026 1117956 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.017098679s)
	I1212 00:12:50.566868 1117956 node_ready.go:35] waiting up to 6m0s for node "addons-513852" to be "Ready" ...
	I1212 00:12:50.574278 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 00:12:52.704278 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.763830059s)
	I1212 00:12:52.725208 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.678769569s)
	I1212 00:12:52.725359 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.629715078s)
	I1212 00:12:52.948177 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:12:54.132790 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.004432937s)
	I1212 00:12:54.132972 1117956 addons.go:467] Verifying addon ingress=true in "addons-513852"
	I1212 00:12:54.133030 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.758462506s)
	I1212 00:12:54.135486 1117956 out.go:177] * Verifying ingress addon...
	I1212 00:12:54.133277 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.565929417s)
	I1212 00:12:54.135575 1117956 addons.go:467] Verifying addon registry=true in "addons-513852"
	I1212 00:12:54.141300 1117956 out.go:177] * Verifying registry addon...
	I1212 00:12:54.139430 1117956 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1212 00:12:54.133448 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.160114301s)
	I1212 00:12:54.133494 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.036104767s)
	I1212 00:12:54.132885 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.865357489s)
	I1212 00:12:54.132950 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.852344232s)
	I1212 00:12:54.133364 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.440003321s)
	I1212 00:12:54.143179 1117956 addons.go:467] Verifying addon metrics-server=true in "addons-513852"
	W1212 00:12:54.143361 1117956 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 00:12:54.143378 1117956 retry.go:31] will retry after 372.771501ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
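The apply failure above is the usual CRD ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but the snapshot.storage.k8s.io CRDs created in the same kubectl apply batch are not yet discoverable, so the REST mapping lookup fails and kubectl reports "ensure CRDs are installed first". In the log below the retried apply with --force completes about a second later (00:12:55.551335). A generic two-phase workaround, shown only as an illustration with hypothetical file names and not as what minikube itself does, is to apply the CRDs first, wait for them to become Established, and then apply the objects that use them:

    # Illustration only; directory and file names are hypothetical.
    kubectl apply -f snapshot-crds/
    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml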
	I1212 00:12:54.143910 1117956 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1212 00:12:54.154778 1117956 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 00:12:54.154858 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:54.161841 1117956 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 00:12:54.161913 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:54.164952 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:54.170396 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1212 00:12:54.172381 1117956 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
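The default-storageclass warning above is an optimistic-concurrency conflict: the update that marks the standard StorageClass as default raced with another writer, so the API server rejected it with "the object has been modified". Re-reading the object and re-submitting normally clears it; a rough command-line equivalent of the intended operation, given here as an illustration rather than the code path minikube uses, is:

    # Mark the "standard" StorageClass as default; kubectl patch re-reads the
    # object first, so re-running it after a conflict usually succeeds.
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'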
	I1212 00:12:54.355213 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.780842585s)
	I1212 00:12:54.355286 1117956 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-513852"
	I1212 00:12:54.358766 1117956 out.go:177] * Verifying csi-hostpath-driver addon...
	I1212 00:12:54.361504 1117956 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1212 00:12:54.374314 1117956 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 00:12:54.374382 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:54.378237 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:54.516927 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 00:12:54.681267 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:54.682444 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:54.883887 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:55.179319 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:55.181377 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:55.341377 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:12:55.383243 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:55.551335 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.034359297s)
	I1212 00:12:55.669675 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:55.674681 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:55.884733 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:56.170320 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:56.174975 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:56.261698 1117956 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1212 00:12:56.261813 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:56.279702 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:56.382704 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:56.397855 1117956 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1212 00:12:56.422889 1117956 addons.go:231] Setting addon gcp-auth=true in "addons-513852"
	I1212 00:12:56.422956 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:56.423454 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:56.452694 1117956 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1212 00:12:56.452752 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:56.492937 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:56.651469 1117956 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 00:12:56.653121 1117956 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1212 00:12:56.655090 1117956 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1212 00:12:56.655110 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1212 00:12:56.670974 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:56.674215 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:56.730040 1117956 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1212 00:12:56.730066 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1212 00:12:56.776167 1117956 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 00:12:56.776191 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1212 00:12:56.819061 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 00:12:56.883116 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:57.170393 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:57.176333 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:57.342215 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:12:57.382849 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:57.670181 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:57.674843 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:57.940346 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:58.196930 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:58.200946 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:58.270812 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.451712747s)
	I1212 00:12:58.273307 1117956 addons.go:467] Verifying addon gcp-auth=true in "addons-513852"
	I1212 00:12:58.275820 1117956 out.go:177] * Verifying gcp-auth addon...
	I1212 00:12:58.278581 1117956 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1212 00:12:58.291328 1117956 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1212 00:12:58.291356 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:58.301862 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
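The kapi.go lines repeat while each addon's pods are still Pending; the poller keeps re-reading the pods behind the label selector until they leave that state. A one-off equivalent from the command line, illustrative only and using the namespace and label shown in the log above, would be:

    # Block until the gcp-auth pod reports Ready, or time out after 5 minutes.
    kubectl -n gcp-auth wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=gcp-auth --timeout=5m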
	I1212 00:12:58.397849 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:58.670087 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:58.674612 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:58.806179 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:58.883993 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:59.170132 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:59.174591 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:59.306402 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:59.383080 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:59.670305 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:59.674011 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:59.805919 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:59.841787 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:12:59.882739 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:00.175306 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:00.176447 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:00.305994 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:00.383987 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:00.670494 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:00.674208 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:00.806037 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:00.883700 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:01.173614 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:01.175998 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:01.307609 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:01.383477 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:01.669929 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:01.674715 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:01.805920 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:01.850060 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:01.883006 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:02.170733 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:02.177919 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:02.306029 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:02.383521 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:02.670286 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:02.674839 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:02.805094 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:02.883646 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:03.169933 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:03.174624 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:03.305649 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:03.382626 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:03.669913 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:03.674879 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:03.805941 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:03.883179 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:04.169396 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:04.174222 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:04.305344 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:04.341763 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:04.382985 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:04.669990 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:04.674731 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:04.805299 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:04.883351 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:05.170478 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:05.174330 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:05.305538 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:05.382638 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:05.669895 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:05.674382 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:05.805603 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:05.884203 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:06.174368 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:06.175101 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:06.305230 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:06.382461 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:06.669479 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:06.674055 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:06.805574 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:06.841384 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:06.882473 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:07.170045 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:07.175067 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:07.305535 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:07.382846 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:07.670361 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:07.674704 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:07.805055 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:07.882879 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:08.170384 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:08.175023 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:08.306259 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:08.383156 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:08.669954 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:08.674629 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:08.806181 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:08.841570 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:08.882812 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:09.169421 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:09.174344 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:09.305827 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:09.383468 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:09.669556 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:09.674358 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:09.805692 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:09.883399 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:10.171636 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:10.174529 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:10.305958 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:10.383427 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:10.669883 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:10.674790 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:10.805405 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:10.884335 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:11.169634 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:11.174326 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:11.305450 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:11.341355 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:11.382669 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:11.670095 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:11.674828 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:11.805988 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:11.883112 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:12.169405 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:12.174107 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:12.305208 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:12.382717 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:12.670289 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:12.675086 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:12.805465 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:12.882933 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:13.169536 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:13.174455 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:13.306124 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:13.383443 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:13.669756 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:13.674474 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:13.805647 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:13.841393 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:13.888331 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:14.169913 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:14.174488 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:14.305961 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:14.382720 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:14.670190 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:14.674868 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:14.805432 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:14.882605 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:15.169921 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:15.174983 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:15.306092 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:15.382665 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:15.670086 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:15.675284 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:15.805438 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:15.883164 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:16.170189 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:16.174943 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:16.306074 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:16.341425 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:16.385971 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:16.670569 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:16.674017 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:16.806086 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:16.883062 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:17.169462 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:17.174313 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:17.310265 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:17.382346 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:17.670137 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:17.674845 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:17.805227 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:17.883477 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:18.170506 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:18.174135 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:18.305809 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:18.383139 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:18.669725 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:18.674351 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:18.805444 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:18.841617 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:18.883142 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:19.169970 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:19.174583 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:19.305941 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:19.382377 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:19.669841 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:19.674495 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:19.806069 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:19.882758 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:20.169417 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:20.174066 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:20.306292 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:20.383059 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:20.669452 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:20.673992 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:20.806284 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:20.842952 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:20.890366 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:21.170254 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:21.175278 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:21.320440 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:21.371746 1117956 node_ready.go:49] node "addons-513852" has status "Ready":"True"
	I1212 00:13:21.371807 1117956 node_ready.go:38] duration metric: took 30.804869695s waiting for node "addons-513852" to be "Ready" ...
	I1212 00:13:21.371845 1117956 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:13:21.395414 1117956 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gvfh4" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:21.398446 1117956 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 00:13:21.398517 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:21.746659 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:21.748275 1117956 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 00:13:21.748343 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:21.826103 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:21.913645 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:22.191781 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:22.202100 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:22.308339 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:22.385332 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:22.673688 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:22.676426 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:22.806261 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:22.886604 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:22.975567 1117956 pod_ready.go:92] pod "coredns-5dd5756b68-gvfh4" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:22.975596 1117956 pod_ready.go:81] duration metric: took 1.580101524s waiting for pod "coredns-5dd5756b68-gvfh4" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:22.975614 1117956 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-513852" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.002469 1117956 pod_ready.go:92] pod "etcd-addons-513852" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:23.002500 1117956 pod_ready.go:81] duration metric: took 26.879184ms waiting for pod "etcd-addons-513852" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.002516 1117956 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-513852" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.015027 1117956 pod_ready.go:92] pod "kube-apiserver-addons-513852" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:23.015054 1117956 pod_ready.go:81] duration metric: took 12.528563ms waiting for pod "kube-apiserver-addons-513852" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.015067 1117956 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-513852" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.021044 1117956 pod_ready.go:92] pod "kube-controller-manager-addons-513852" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:23.021069 1117956 pod_ready.go:81] duration metric: took 5.99407ms waiting for pod "kube-controller-manager-addons-513852" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.021083 1117956 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8kkgn" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.170783 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:23.180641 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:23.305630 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:23.342713 1117956 pod_ready.go:92] pod "kube-proxy-8kkgn" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:23.342737 1117956 pod_ready.go:81] duration metric: took 321.646074ms waiting for pod "kube-proxy-8kkgn" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.342750 1117956 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-513852" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.384059 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:23.669761 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:23.678381 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:23.745460 1117956 pod_ready.go:92] pod "kube-scheduler-addons-513852" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:23.745533 1117956 pod_ready.go:81] duration metric: took 402.775007ms waiting for pod "kube-scheduler-addons-513852" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.745560 1117956 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.806559 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:23.895182 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:24.177836 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:24.180486 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:24.306177 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:24.385746 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:24.671870 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:24.677237 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:24.810007 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:24.884830 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:25.171907 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:25.200541 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:25.306505 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:25.385729 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:25.671151 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:25.677816 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:25.809076 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:25.905072 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:26.049731 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:26.170243 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:26.175695 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:26.305222 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:26.384208 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:26.669661 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:26.675015 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:26.808378 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:26.883746 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:27.170664 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:27.176113 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:27.306370 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:27.390019 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:27.671856 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:27.680769 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:27.807107 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:27.884326 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:28.050874 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:28.170592 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:28.174575 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:28.306108 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:28.388625 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:28.670652 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:28.678889 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:28.805495 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:28.884389 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:29.170458 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:29.174894 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:29.305513 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:29.383586 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:29.670922 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:29.676453 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:29.806302 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:29.885524 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:30.051297 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:30.171706 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:30.176082 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:30.306477 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:30.384297 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:30.670972 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:30.677876 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:30.807162 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:30.887599 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:31.170576 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:31.174808 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:31.305224 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:31.383621 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:31.669818 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:31.675295 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:31.805906 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:31.884466 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:32.170263 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:32.175315 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:32.310138 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:32.384752 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:32.549505 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:32.669976 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:32.675724 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:32.806342 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:32.884023 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:33.171032 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:33.178521 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:33.306442 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:33.384454 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:33.670597 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:33.674730 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:33.805547 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:33.883763 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:34.183630 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:34.184545 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:34.307143 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:34.385034 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:34.670919 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:34.676923 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:34.805475 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:34.885086 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:35.053055 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:35.171032 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:35.175309 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:35.305664 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:35.384265 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:35.670659 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:35.675159 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:35.806203 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:35.883595 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:36.170313 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:36.176016 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:36.307815 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:36.386073 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:36.671065 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:36.676182 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:36.807106 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:36.885406 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:37.180564 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:37.183283 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:37.306056 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:37.385779 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:37.550139 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:37.671583 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:37.683599 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:37.806205 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:37.887977 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:38.177186 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:38.184178 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:38.310155 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:38.385530 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:38.670303 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:38.676577 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:38.806166 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:38.884125 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:39.171047 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:39.175437 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:39.305449 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:39.384305 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:39.550922 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:39.670673 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:39.675157 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:39.807965 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:39.885495 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:40.172004 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:40.176908 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:40.305910 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:40.384948 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:40.678671 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:40.680603 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:40.807761 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:40.884880 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:41.171579 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:41.182219 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:41.306157 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:41.388291 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:41.674616 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:41.679562 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:41.806812 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:41.885043 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:42.049617 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:42.187791 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:42.192243 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:42.306562 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:42.385714 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:42.674339 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:42.678933 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:42.806165 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:42.890816 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:43.178736 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:43.196631 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:43.306683 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:43.390817 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:43.671111 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:43.676559 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:43.807650 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:43.884947 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:44.061475 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:44.172988 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:44.177608 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:44.308637 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:44.385829 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:44.674664 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:44.679239 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:44.807128 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:44.907038 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:45.172154 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:45.177794 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:45.306425 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:45.385790 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:45.669964 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:45.675512 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:45.807227 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:45.883958 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:46.170910 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:46.176337 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:46.306840 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:46.384496 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:46.553413 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:46.670422 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:46.675384 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:46.805536 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:46.885359 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:47.170211 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:47.175763 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:47.305716 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:47.390525 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:47.669790 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:47.675248 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:47.810683 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:47.884092 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:48.170269 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:48.175696 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:48.305309 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:48.383577 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:48.553983 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:48.671279 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:48.676515 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:48.806186 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:48.884441 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:49.170764 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:49.176235 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:49.307529 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:49.392910 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:49.670768 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:49.674914 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:49.805595 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:49.884811 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:50.169902 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:50.175249 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:50.308634 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:50.384592 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:50.670541 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:50.674811 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:50.805392 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:50.884978 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:51.049720 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:51.169605 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:51.174809 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:51.305922 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:51.387970 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:51.669810 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:51.675192 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:51.805935 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:51.884992 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:52.057578 1117956 pod_ready.go:92] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:52.057652 1117956 pod_ready.go:81] duration metric: took 28.312070605s waiting for pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:52.057677 1117956 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ssl96" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:52.174457 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:52.197142 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:52.305762 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:52.384615 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:52.670827 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:52.675047 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:52.806336 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:52.884454 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:53.171131 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:53.177405 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:53.306838 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:53.389687 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:53.671272 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:53.677627 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:53.807068 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:53.885525 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:54.105174 1117956 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ssl96" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:54.170972 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:54.189756 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:54.307686 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:54.385105 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:54.670420 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:54.679154 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:54.809069 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:54.890068 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:55.170352 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:55.175551 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:55.319343 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:55.383970 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:55.604819 1117956 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-ssl96" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:55.604847 1117956 pod_ready.go:81] duration metric: took 3.547149599s waiting for pod "nvidia-device-plugin-daemonset-ssl96" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:55.604894 1117956 pod_ready.go:38] duration metric: took 34.232993104s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:13:55.604915 1117956 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:13:55.604942 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:13:55.605015 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:13:55.652022 1117956 cri.go:89] found id: "171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050"
	I1212 00:13:55.652089 1117956 cri.go:89] found id: ""
	I1212 00:13:55.652104 1117956 logs.go:284] 1 containers: [171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050]
	I1212 00:13:55.652167 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:13:55.656427 1117956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:13:55.656514 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:13:55.669914 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:55.675567 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:55.702060 1117956 cri.go:89] found id: "ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228"
	I1212 00:13:55.702088 1117956 cri.go:89] found id: ""
	I1212 00:13:55.702096 1117956 logs.go:284] 1 containers: [ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228]
	I1212 00:13:55.702156 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:13:55.706709 1117956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:13:55.706828 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:13:55.752511 1117956 cri.go:89] found id: "14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a"
	I1212 00:13:55.752534 1117956 cri.go:89] found id: ""
	I1212 00:13:55.752542 1117956 logs.go:284] 1 containers: [14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a]
	I1212 00:13:55.752601 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:13:55.757647 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:13:55.757766 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:13:55.801651 1117956 cri.go:89] found id: "7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06"
	I1212 00:13:55.801677 1117956 cri.go:89] found id: ""
	I1212 00:13:55.801686 1117956 logs.go:284] 1 containers: [7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06]
	I1212 00:13:55.801776 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:13:55.806000 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:55.807101 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:13:55.807197 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:13:55.862882 1117956 cri.go:89] found id: "ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b"
	I1212 00:13:55.862939 1117956 cri.go:89] found id: ""
	I1212 00:13:55.862959 1117956 logs.go:284] 1 containers: [ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b]
	I1212 00:13:55.863021 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:13:55.867414 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:13:55.867513 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:13:55.886837 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:55.911143 1117956 cri.go:89] found id: "dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e"
	I1212 00:13:55.911166 1117956 cri.go:89] found id: ""
	I1212 00:13:55.911174 1117956 logs.go:284] 1 containers: [dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e]
	I1212 00:13:55.911227 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:13:55.915609 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:13:55.915676 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:13:55.958223 1117956 cri.go:89] found id: "83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656"
	I1212 00:13:55.958245 1117956 cri.go:89] found id: ""
	I1212 00:13:55.958253 1117956 logs.go:284] 1 containers: [83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656]
	I1212 00:13:55.958335 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:13:55.962711 1117956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:13:55.962783 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 00:13:56.028590 1117956 logs.go:138] Found kubelet problem: Dec 12 00:12:53 addons-513852 kubelet[1352]: W1212 00:12:53.721647    1352 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:13:56.028863 1117956 logs.go:138] Found kubelet problem: Dec 12 00:12:53 addons-513852 kubelet[1352]: E1212 00:12:53.721689    1352 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:13:56.034262 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346591    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:13:56.034528 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346625    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:13:56.034750 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346781    1352 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	W1212 00:13:56.034987 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346807    1352 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	I1212 00:13:56.070382 1117956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:13:56.070463 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:13:56.093032 1117956 logs.go:123] Gathering logs for kube-apiserver [171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050] ...
	I1212 00:13:56.093111 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050"
	I1212 00:13:56.172050 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:56.185508 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:56.196299 1117956 logs.go:123] Gathering logs for kube-proxy [ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b] ...
	I1212 00:13:56.196337 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b"
	I1212 00:13:56.263365 1117956 logs.go:123] Gathering logs for container status ...
	I1212 00:13:56.263396 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:13:56.306106 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:56.324863 1117956 logs.go:123] Gathering logs for kindnet [83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656] ...
	I1212 00:13:56.324893 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656"
	I1212 00:13:56.374398 1117956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:13:56.374426 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:13:56.388787 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:56.499848 1117956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:13:56.499927 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 00:13:56.670531 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:56.676229 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:56.806148 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:56.876618 1117956 logs.go:123] Gathering logs for etcd [ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228] ...
	I1212 00:13:56.876690 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228"
	I1212 00:13:56.898651 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:57.107342 1117956 logs.go:123] Gathering logs for coredns [14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a] ...
	I1212 00:13:57.107423 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a"
	I1212 00:13:57.176502 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:57.177681 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:57.253199 1117956 logs.go:123] Gathering logs for kube-scheduler [7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06] ...
	I1212 00:13:57.253290 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06"
	I1212 00:13:57.308916 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:57.385729 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:57.453631 1117956 logs.go:123] Gathering logs for kube-controller-manager [dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e] ...
	I1212 00:13:57.453666 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e"
	I1212 00:13:57.559125 1117956 out.go:309] Setting ErrFile to fd 2...
	I1212 00:13:57.559158 1117956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1212 00:13:57.559237 1117956 out.go:239] X Problems detected in kubelet:
	W1212 00:13:57.559256 1117956 out.go:239]   Dec 12 00:12:53 addons-513852 kubelet[1352]: E1212 00:12:53.721689    1352 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:13:57.559268 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346591    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:13:57.559279 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346625    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:13:57.559417 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346781    1352 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	W1212 00:13:57.559435 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346807    1352 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	I1212 00:13:57.559442 1117956 out.go:309] Setting ErrFile to fd 2...
	I1212 00:13:57.559454 1117956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:13:57.669891 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:57.675480 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:57.806252 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:57.884611 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:58.187115 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:58.188489 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:58.306608 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:58.385502 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:58.674621 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:58.679220 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:58.806400 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:58.884417 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:59.170057 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:59.176387 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:59.306215 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:59.385746 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:59.671086 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:59.675174 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:59.807155 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:59.884351 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:00.171472 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:00.176396 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:14:00.306167 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:00.385148 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:00.670700 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:00.674877 1117956 kapi.go:107] duration metric: took 1m6.53096696s to wait for kubernetes.io/minikube-addons=registry ...
	I1212 00:14:00.805465 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:00.885042 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:01.171813 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:01.306146 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:01.385413 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:01.674729 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:01.805980 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:01.917900 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:02.173783 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:02.315441 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:02.392132 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:02.671132 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:02.805742 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:02.884950 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:03.178620 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:03.306617 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:03.393390 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:03.670877 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:03.806178 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:03.890095 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:04.171122 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:04.307188 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:04.386968 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:04.676214 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:04.806902 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:04.885091 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:05.170745 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:05.306575 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:05.385074 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:05.674801 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:05.805686 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:05.884496 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:06.170675 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:06.306469 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:06.385422 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:06.673400 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:06.807322 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:06.884610 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:07.170577 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:07.308026 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:07.384284 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:07.561467 1117956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:14:07.578145 1117956 api_server.go:72] duration metric: took 1m19.115217049s to wait for apiserver process to appear ...
	I1212 00:14:07.578224 1117956 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:14:07.578271 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:14:07.578338 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:14:07.673684 1117956 kapi.go:107] duration metric: took 1m13.534254732s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 00:14:07.717756 1117956 cri.go:89] found id: "171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050"
	I1212 00:14:07.717780 1117956 cri.go:89] found id: ""
	I1212 00:14:07.717789 1117956 logs.go:284] 1 containers: [171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050]
	I1212 00:14:07.717846 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:07.734268 1117956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:14:07.734348 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:14:07.787418 1117956 cri.go:89] found id: "ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228"
	I1212 00:14:07.787442 1117956 cri.go:89] found id: ""
	I1212 00:14:07.787450 1117956 logs.go:284] 1 containers: [ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228]
	I1212 00:14:07.787506 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:07.792786 1117956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:14:07.792859 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:14:07.806366 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:07.852806 1117956 cri.go:89] found id: "14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a"
	I1212 00:14:07.852828 1117956 cri.go:89] found id: ""
	I1212 00:14:07.852835 1117956 logs.go:284] 1 containers: [14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a]
	I1212 00:14:07.852888 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:07.857909 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:14:07.857980 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:14:07.884680 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:07.925308 1117956 cri.go:89] found id: "7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06"
	I1212 00:14:07.925381 1117956 cri.go:89] found id: ""
	I1212 00:14:07.925402 1117956 logs.go:284] 1 containers: [7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06]
	I1212 00:14:07.925498 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:07.953431 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:14:07.953502 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:14:08.034957 1117956 cri.go:89] found id: "ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b"
	I1212 00:14:08.034977 1117956 cri.go:89] found id: ""
	I1212 00:14:08.034987 1117956 logs.go:284] 1 containers: [ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b]
	I1212 00:14:08.035039 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:08.046893 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:14:08.046964 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:14:08.092774 1117956 cri.go:89] found id: "dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e"
	I1212 00:14:08.092795 1117956 cri.go:89] found id: ""
	I1212 00:14:08.092802 1117956 logs.go:284] 1 containers: [dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e]
	I1212 00:14:08.092854 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:08.101620 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:14:08.101752 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:14:08.197927 1117956 cri.go:89] found id: "83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656"
	I1212 00:14:08.197995 1117956 cri.go:89] found id: ""
	I1212 00:14:08.198016 1117956 logs.go:284] 1 containers: [83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656]
	I1212 00:14:08.198101 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:08.203208 1117956 logs.go:123] Gathering logs for kube-apiserver [171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050] ...
	I1212 00:14:08.203278 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050"
	I1212 00:14:08.279301 1117956 logs.go:123] Gathering logs for etcd [ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228] ...
	I1212 00:14:08.283035 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228"
	I1212 00:14:08.305975 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:08.359864 1117956 logs.go:123] Gathering logs for coredns [14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a] ...
	I1212 00:14:08.359935 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a"
	I1212 00:14:08.395636 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:08.447738 1117956 logs.go:123] Gathering logs for kube-scheduler [7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06] ...
	I1212 00:14:08.447820 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06"
	I1212 00:14:08.516251 1117956 logs.go:123] Gathering logs for kube-proxy [ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b] ...
	I1212 00:14:08.516327 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b"
	I1212 00:14:08.611374 1117956 logs.go:123] Gathering logs for kube-controller-manager [dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e] ...
	I1212 00:14:08.611399 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e"
	I1212 00:14:08.721802 1117956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:14:08.721877 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:14:08.807824 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:14:08.811574 1117956 logs.go:138] Found kubelet problem: Dec 12 00:12:53 addons-513852 kubelet[1352]: W1212 00:12:53.721647    1352 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:14:08.811838 1117956 logs.go:138] Found kubelet problem: Dec 12 00:12:53 addons-513852 kubelet[1352]: E1212 00:12:53.721689    1352 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:14:08.817086 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346591    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:14:08.817871 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346625    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:14:08.818070 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346781    1352 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	W1212 00:14:08.818292 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346807    1352 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	I1212 00:14:08.861864 1117956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:14:08.861940 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:14:08.885082 1117956 logs.go:123] Gathering logs for container status ...
	I1212 00:14:08.885211 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:14:08.891460 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:08.990768 1117956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:14:08.990836 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:14:09.108590 1117956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:14:09.108674 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 00:14:09.309217 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:09.317651 1117956 logs.go:123] Gathering logs for kindnet [83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656] ...
	I1212 00:14:09.317682 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656"
	I1212 00:14:09.367835 1117956 out.go:309] Setting ErrFile to fd 2...
	I1212 00:14:09.367865 1117956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1212 00:14:09.367917 1117956 out.go:239] X Problems detected in kubelet:
	W1212 00:14:09.367926 1117956 out.go:239]   Dec 12 00:12:53 addons-513852 kubelet[1352]: E1212 00:12:53.721689    1352 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:14:09.367933 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346591    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:14:09.367942 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346625    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:14:09.367951 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346781    1352 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	W1212 00:14:09.367957 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346807    1352 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	I1212 00:14:09.367968 1117956 out.go:309] Setting ErrFile to fd 2...
	I1212 00:14:09.367974 1117956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:09.384628 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:09.807585 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:09.884457 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:10.305583 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:10.385259 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:10.806214 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:10.885039 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:11.305588 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:11.383942 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:11.805681 1117956 kapi.go:107] duration metric: took 1m13.527097181s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 00:14:11.807871 1117956 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-513852 cluster.
	I1212 00:14:11.810212 1117956 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 00:14:11.812100 1117956 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 00:14:11.884331 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:12.384302 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:12.884874 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:13.388004 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:13.885952 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:14.386060 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:14.884501 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:15.384544 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:15.884313 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:16.384890 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:16.884217 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:17.386089 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:17.887672 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:18.383393 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:18.884618 1117956 kapi.go:107] duration metric: took 1m24.523114093s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 00:14:18.886891 1117956 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, metrics-server, inspektor-gadget, storage-provisioner, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1212 00:14:18.888733 1117956 addons.go:502] enable addons completed in 1m30.815446312s: enabled=[ingress-dns cloud-spanner nvidia-device-plugin metrics-server inspektor-gadget storage-provisioner storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1212 00:14:19.368206 1117956 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 00:14:19.377869 1117956 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 00:14:19.379201 1117956 api_server.go:141] control plane version: v1.28.4
	I1212 00:14:19.379226 1117956 api_server.go:131] duration metric: took 11.800981103s to wait for apiserver health ...
	I1212 00:14:19.379234 1117956 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:14:19.379256 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:14:19.379346 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:14:19.424112 1117956 cri.go:89] found id: "171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050"
	I1212 00:14:19.424137 1117956 cri.go:89] found id: ""
	I1212 00:14:19.424146 1117956 logs.go:284] 1 containers: [171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050]
	I1212 00:14:19.424209 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:19.429070 1117956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:14:19.429176 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:14:19.473920 1117956 cri.go:89] found id: "ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228"
	I1212 00:14:19.473947 1117956 cri.go:89] found id: ""
	I1212 00:14:19.473956 1117956 logs.go:284] 1 containers: [ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228]
	I1212 00:14:19.474011 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:19.478305 1117956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:14:19.478375 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:14:19.525518 1117956 cri.go:89] found id: "14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a"
	I1212 00:14:19.525540 1117956 cri.go:89] found id: ""
	I1212 00:14:19.525548 1117956 logs.go:284] 1 containers: [14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a]
	I1212 00:14:19.525603 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:19.529973 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:14:19.530053 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:14:19.571845 1117956 cri.go:89] found id: "7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06"
	I1212 00:14:19.571864 1117956 cri.go:89] found id: ""
	I1212 00:14:19.571872 1117956 logs.go:284] 1 containers: [7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06]
	I1212 00:14:19.571936 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:19.576539 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:14:19.576647 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:14:19.618163 1117956 cri.go:89] found id: "ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b"
	I1212 00:14:19.618247 1117956 cri.go:89] found id: ""
	I1212 00:14:19.618262 1117956 logs.go:284] 1 containers: [ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b]
	I1212 00:14:19.618324 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:19.622701 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:14:19.622786 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:14:19.664658 1117956 cri.go:89] found id: "dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e"
	I1212 00:14:19.664719 1117956 cri.go:89] found id: ""
	I1212 00:14:19.664741 1117956 logs.go:284] 1 containers: [dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e]
	I1212 00:14:19.664822 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:19.669507 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:14:19.669623 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:14:19.720316 1117956 cri.go:89] found id: "83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656"
	I1212 00:14:19.720337 1117956 cri.go:89] found id: ""
	I1212 00:14:19.720345 1117956 logs.go:284] 1 containers: [83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656]
	I1212 00:14:19.720401 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:19.724979 1117956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:14:19.725041 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:14:19.814489 1117956 logs.go:123] Gathering logs for kube-apiserver [171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050] ...
	I1212 00:14:19.814526 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050"
	I1212 00:14:19.875642 1117956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:14:19.875673 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:14:19.897542 1117956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:14:19.897572 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 00:14:20.071934 1117956 logs.go:123] Gathering logs for etcd [ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228] ...
	I1212 00:14:20.071968 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228"
	I1212 00:14:20.138833 1117956 logs.go:123] Gathering logs for coredns [14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a] ...
	I1212 00:14:20.138866 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a"
	I1212 00:14:20.193487 1117956 logs.go:123] Gathering logs for kube-scheduler [7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06] ...
	I1212 00:14:20.193520 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06"
	I1212 00:14:20.252956 1117956 logs.go:123] Gathering logs for kube-proxy [ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b] ...
	I1212 00:14:20.252988 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b"
	I1212 00:14:20.300265 1117956 logs.go:123] Gathering logs for kube-controller-manager [dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e] ...
	I1212 00:14:20.300294 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e"
	I1212 00:14:20.379531 1117956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:14:20.379564 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 00:14:20.447800 1117956 logs.go:138] Found kubelet problem: Dec 12 00:12:53 addons-513852 kubelet[1352]: W1212 00:12:53.721647    1352 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.448042 1117956 logs.go:138] Found kubelet problem: Dec 12 00:12:53 addons-513852 kubelet[1352]: E1212 00:12:53.721689    1352 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.453317 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346591    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.453517 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346625    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.453684 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346781    1352 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.453870 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346807    1352 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	I1212 00:14:20.490481 1117956 logs.go:123] Gathering logs for container status ...
	I1212 00:14:20.490506 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:14:20.556271 1117956 logs.go:123] Gathering logs for kindnet [83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656] ...
	I1212 00:14:20.556299 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656"
	I1212 00:14:20.604692 1117956 out.go:309] Setting ErrFile to fd 2...
	I1212 00:14:20.604716 1117956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1212 00:14:20.604762 1117956 out.go:239] X Problems detected in kubelet:
	W1212 00:14:20.604770 1117956 out.go:239]   Dec 12 00:12:53 addons-513852 kubelet[1352]: E1212 00:12:53.721689    1352 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.604777 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346591    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.604806 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346625    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.604822 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346781    1352 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.604828 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346807    1352 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	I1212 00:14:20.604840 1117956 out.go:309] Setting ErrFile to fd 2...
	I1212 00:14:20.604847 1117956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:30.616480 1117956 system_pods.go:59] 18 kube-system pods found
	I1212 00:14:30.616518 1117956 system_pods.go:61] "coredns-5dd5756b68-gvfh4" [b1b349a6-9a5a-4c6f-91c8-c6e3b567eea0] Running
	I1212 00:14:30.616526 1117956 system_pods.go:61] "csi-hostpath-attacher-0" [a06b11fe-ad4b-470f-82d5-384e33be061a] Running
	I1212 00:14:30.616531 1117956 system_pods.go:61] "csi-hostpath-resizer-0" [ba8b9710-a94d-4d03-9bd8-aac9f2bd8984] Running
	I1212 00:14:30.616536 1117956 system_pods.go:61] "csi-hostpathplugin-8kkcd" [65e82f73-1b35-4089-9756-16699e21e0ef] Running
	I1212 00:14:30.616542 1117956 system_pods.go:61] "etcd-addons-513852" [8e75e307-2602-402c-9981-232e674486e0] Running
	I1212 00:14:30.616547 1117956 system_pods.go:61] "kindnet-d7b6k" [2c045b49-fdb0-4d3b-8508-98e082fb738a] Running
	I1212 00:14:30.616554 1117956 system_pods.go:61] "kube-apiserver-addons-513852" [9d5b6840-fab9-4266-9c77-70d15c2c9407] Running
	I1212 00:14:30.616561 1117956 system_pods.go:61] "kube-controller-manager-addons-513852" [35ba684c-0799-4a2c-80ba-591864509f6b] Running
	I1212 00:14:30.616570 1117956 system_pods.go:61] "kube-ingress-dns-minikube" [29a08ebe-149e-48f7-96e3-41c96a718619] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 00:14:30.616579 1117956 system_pods.go:61] "kube-proxy-8kkgn" [72599864-0658-4e1a-9ce3-a884c258f4a5] Running
	I1212 00:14:30.616585 1117956 system_pods.go:61] "kube-scheduler-addons-513852" [971cdcff-7a39-46e4-a7a9-7f3e10969322] Running
	I1212 00:14:30.616591 1117956 system_pods.go:61] "metrics-server-7c66d45ddc-q8k8b" [ea3981e3-770c-404a-aa8d-66a2d769677f] Running
	I1212 00:14:30.616596 1117956 system_pods.go:61] "nvidia-device-plugin-daemonset-ssl96" [97efc1d3-32a2-484f-90ee-d7d726a4211f] Running
	I1212 00:14:30.616603 1117956 system_pods.go:61] "registry-nztsx" [d6d72673-3fd0-4b6a-8d6c-7ebec393d5cf] Running
	I1212 00:14:30.616608 1117956 system_pods.go:61] "registry-proxy-v7h4s" [a63d003e-1e86-4e98-8cec-b7ede232f639] Running
	I1212 00:14:30.616616 1117956 system_pods.go:61] "snapshot-controller-58dbcc7b99-mclbz" [6da16743-7e5d-4934-8b84-d5af75a53800] Running
	I1212 00:14:30.616621 1117956 system_pods.go:61] "snapshot-controller-58dbcc7b99-q5h4c" [796392dc-a006-4217-88f5-0525e11bf20f] Running
	I1212 00:14:30.616626 1117956 system_pods.go:61] "storage-provisioner" [8223859f-1e90-4ec7-b191-0522163b4b21] Running
	I1212 00:14:30.616633 1117956 system_pods.go:74] duration metric: took 11.237392221s to wait for pod list to return data ...
	I1212 00:14:30.616642 1117956 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:14:30.619191 1117956 default_sa.go:45] found service account: "default"
	I1212 00:14:30.619217 1117956 default_sa.go:55] duration metric: took 2.565659ms for default service account to be created ...
	I1212 00:14:30.619226 1117956 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:14:30.628459 1117956 system_pods.go:86] 18 kube-system pods found
	I1212 00:14:30.628491 1117956 system_pods.go:89] "coredns-5dd5756b68-gvfh4" [b1b349a6-9a5a-4c6f-91c8-c6e3b567eea0] Running
	I1212 00:14:30.628502 1117956 system_pods.go:89] "csi-hostpath-attacher-0" [a06b11fe-ad4b-470f-82d5-384e33be061a] Running
	I1212 00:14:30.628507 1117956 system_pods.go:89] "csi-hostpath-resizer-0" [ba8b9710-a94d-4d03-9bd8-aac9f2bd8984] Running
	I1212 00:14:30.628512 1117956 system_pods.go:89] "csi-hostpathplugin-8kkcd" [65e82f73-1b35-4089-9756-16699e21e0ef] Running
	I1212 00:14:30.628517 1117956 system_pods.go:89] "etcd-addons-513852" [8e75e307-2602-402c-9981-232e674486e0] Running
	I1212 00:14:30.628522 1117956 system_pods.go:89] "kindnet-d7b6k" [2c045b49-fdb0-4d3b-8508-98e082fb738a] Running
	I1212 00:14:30.628527 1117956 system_pods.go:89] "kube-apiserver-addons-513852" [9d5b6840-fab9-4266-9c77-70d15c2c9407] Running
	I1212 00:14:30.628539 1117956 system_pods.go:89] "kube-controller-manager-addons-513852" [35ba684c-0799-4a2c-80ba-591864509f6b] Running
	I1212 00:14:30.628551 1117956 system_pods.go:89] "kube-ingress-dns-minikube" [29a08ebe-149e-48f7-96e3-41c96a718619] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 00:14:30.628562 1117956 system_pods.go:89] "kube-proxy-8kkgn" [72599864-0658-4e1a-9ce3-a884c258f4a5] Running
	I1212 00:14:30.628568 1117956 system_pods.go:89] "kube-scheduler-addons-513852" [971cdcff-7a39-46e4-a7a9-7f3e10969322] Running
	I1212 00:14:30.628573 1117956 system_pods.go:89] "metrics-server-7c66d45ddc-q8k8b" [ea3981e3-770c-404a-aa8d-66a2d769677f] Running
	I1212 00:14:30.628580 1117956 system_pods.go:89] "nvidia-device-plugin-daemonset-ssl96" [97efc1d3-32a2-484f-90ee-d7d726a4211f] Running
	I1212 00:14:30.628585 1117956 system_pods.go:89] "registry-nztsx" [d6d72673-3fd0-4b6a-8d6c-7ebec393d5cf] Running
	I1212 00:14:30.628592 1117956 system_pods.go:89] "registry-proxy-v7h4s" [a63d003e-1e86-4e98-8cec-b7ede232f639] Running
	I1212 00:14:30.628597 1117956 system_pods.go:89] "snapshot-controller-58dbcc7b99-mclbz" [6da16743-7e5d-4934-8b84-d5af75a53800] Running
	I1212 00:14:30.628602 1117956 system_pods.go:89] "snapshot-controller-58dbcc7b99-q5h4c" [796392dc-a006-4217-88f5-0525e11bf20f] Running
	I1212 00:14:30.628609 1117956 system_pods.go:89] "storage-provisioner" [8223859f-1e90-4ec7-b191-0522163b4b21] Running
	I1212 00:14:30.628616 1117956 system_pods.go:126] duration metric: took 9.384837ms to wait for k8s-apps to be running ...
	I1212 00:14:30.628623 1117956 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:14:30.628681 1117956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:14:30.642263 1117956 system_svc.go:56] duration metric: took 13.630741ms WaitForService to wait for kubelet.
	I1212 00:14:30.642291 1117956 kubeadm.go:581] duration metric: took 1m42.179368052s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 00:14:30.642310 1117956 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:14:30.645504 1117956 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 00:14:30.645537 1117956 node_conditions.go:123] node cpu capacity is 2
	I1212 00:14:30.645548 1117956 node_conditions.go:105] duration metric: took 3.232905ms to run NodePressure ...
	I1212 00:14:30.645560 1117956 start.go:228] waiting for startup goroutines ...
	I1212 00:14:30.645566 1117956 start.go:233] waiting for cluster config update ...
	I1212 00:14:30.645580 1117956 start.go:242] writing updated cluster config ...
	I1212 00:14:30.645863 1117956 ssh_runner.go:195] Run: rm -f paused
	I1212 00:14:30.983564 1117956 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 00:14:30.985843 1117956 out.go:177] * Done! kubectl is now configured to use "addons-513852" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Dec 12 00:17:25 addons-513852 conmon[4342]: conmon 74584247c4a245721f12 <ninfo>: container 4353 exited with status 137
	Dec 12 00:17:26 addons-513852 crio[888]: time="2023-12-12 00:17:26.031875685Z" level=info msg="Stopped container 74584247c4a245721f12d8f52c79f99bff31f1056d52023f2ae49e8b3280c94e: ingress-nginx/ingress-nginx-controller-7c6974c4d8-zbqcl/controller" id=1e90a66e-4c22-4024-98d6-3f2cb9be343e name=/runtime.v1.RuntimeService/StopContainer
	Dec 12 00:17:26 addons-513852 crio[888]: time="2023-12-12 00:17:26.032557577Z" level=info msg="Stopping pod sandbox: f35447a34fd7570a1855c37e5d2e93ff65fc5a9a9e02d59102e30423e8158581" id=032a2a11-4ca5-4eb9-82dc-3b56762d4a4c name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 00:17:26 addons-513852 crio[888]: time="2023-12-12 00:17:26.036278353Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-ZH7UR7AMXVFA3TRJ - [0:0]\n:KUBE-HP-O72M2UVIRVGYWLIZ - [0:0]\n-X KUBE-HP-O72M2UVIRVGYWLIZ\n-X KUBE-HP-ZH7UR7AMXVFA3TRJ\nCOMMIT\n"
	Dec 12 00:17:26 addons-513852 crio[888]: time="2023-12-12 00:17:26.037893747Z" level=info msg="Closing host port tcp:80"
	Dec 12 00:17:26 addons-513852 crio[888]: time="2023-12-12 00:17:26.037935264Z" level=info msg="Closing host port tcp:443"
	Dec 12 00:17:26 addons-513852 crio[888]: time="2023-12-12 00:17:26.039462431Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 12 00:17:26 addons-513852 crio[888]: time="2023-12-12 00:17:26.039492977Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 12 00:17:26 addons-513852 crio[888]: time="2023-12-12 00:17:26.039685013Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7c6974c4d8-zbqcl Namespace:ingress-nginx ID:f35447a34fd7570a1855c37e5d2e93ff65fc5a9a9e02d59102e30423e8158581 UID:0bc9cb0b-015a-4c22-bafd-02a5359dc2a8 NetNS:/var/run/netns/0b7ae179-9a45-428f-b00b-086ec6731505 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 12 00:17:26 addons-513852 crio[888]: time="2023-12-12 00:17:26.039833841Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7c6974c4d8-zbqcl from CNI network \"kindnet\" (type=ptp)"
	Dec 12 00:17:26 addons-513852 crio[888]: time="2023-12-12 00:17:26.070825545Z" level=info msg="Stopped pod sandbox: f35447a34fd7570a1855c37e5d2e93ff65fc5a9a9e02d59102e30423e8158581" id=032a2a11-4ca5-4eb9-82dc-3b56762d4a4c name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 00:17:26 addons-513852 crio[888]: time="2023-12-12 00:17:26.140139360Z" level=info msg="Removing container: 74584247c4a245721f12d8f52c79f99bff31f1056d52023f2ae49e8b3280c94e" id=0fada046-9531-4a76-89f0-1ba496d74ce2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:17:26 addons-513852 crio[888]: time="2023-12-12 00:17:26.157173915Z" level=info msg="Removed container 74584247c4a245721f12d8f52c79f99bff31f1056d52023f2ae49e8b3280c94e: ingress-nginx/ingress-nginx-controller-7c6974c4d8-zbqcl/controller" id=0fada046-9531-4a76-89f0-1ba496d74ce2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:17:27 addons-513852 crio[888]: time="2023-12-12 00:17:27.583464423Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=d272784e-ee47-47c6-86ba-84d59fa190c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:17:27 addons-513852 crio[888]: time="2023-12-12 00:17:27.583663366Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=d272784e-ee47-47c6-86ba-84d59fa190c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:17:27 addons-513852 crio[888]: time="2023-12-12 00:17:27.585000090Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=acf80d67-bd4d-4011-8480-2faca6d17ab2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:17:27 addons-513852 crio[888]: time="2023-12-12 00:17:27.585189376Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=acf80d67-bd4d-4011-8480-2faca6d17ab2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:17:27 addons-513852 crio[888]: time="2023-12-12 00:17:27.585895226Z" level=info msg="Creating container: default/hello-world-app-5d77478584-zvbn2/hello-world-app" id=680a5859-2aa1-4235-92c5-1a294940e26b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:17:27 addons-513852 crio[888]: time="2023-12-12 00:17:27.585982608Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 12 00:17:27 addons-513852 crio[888]: time="2023-12-12 00:17:27.651552855Z" level=info msg="Created container 2eda891c56ae954ee6cc396862dd8c62ae43b1aad5166672342ce433482f110f: default/hello-world-app-5d77478584-zvbn2/hello-world-app" id=680a5859-2aa1-4235-92c5-1a294940e26b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:17:27 addons-513852 crio[888]: time="2023-12-12 00:17:27.652459519Z" level=info msg="Starting container: 2eda891c56ae954ee6cc396862dd8c62ae43b1aad5166672342ce433482f110f" id=bb66f252-2df0-4bb9-a705-c2d68952591f name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:17:27 addons-513852 conmon[6600]: conmon 2eda891c56ae954ee6cc <ninfo>: container 6611 exited with status 1
	Dec 12 00:17:27 addons-513852 crio[888]: time="2023-12-12 00:17:27.665228256Z" level=info msg="Started container" PID=6611 containerID=2eda891c56ae954ee6cc396862dd8c62ae43b1aad5166672342ce433482f110f description=default/hello-world-app-5d77478584-zvbn2/hello-world-app id=bb66f252-2df0-4bb9-a705-c2d68952591f name=/runtime.v1.RuntimeService/StartContainer sandboxID=414f7d343efbac88404391c3372686bccc0e3ddee5ee7ee10782d896d552a4b1
	Dec 12 00:17:28 addons-513852 crio[888]: time="2023-12-12 00:17:28.148069334Z" level=info msg="Removing container: 972878a0592aff06db946d8cb05f35a3d2a17fc6ad693759013c6e152421a70d" id=a585e192-766f-4bbf-b39f-388fce76a23c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:17:28 addons-513852 crio[888]: time="2023-12-12 00:17:28.170127018Z" level=info msg="Removed container 972878a0592aff06db946d8cb05f35a3d2a17fc6ad693759013c6e152421a70d: default/hello-world-app-5d77478584-zvbn2/hello-world-app" id=a585e192-766f-4bbf-b39f-388fce76a23c name=/runtime.v1.RuntimeService/RemoveContainer
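	In the CRI-O log above, the ingress-nginx controller pod is being torn down (its hostport 80/443 rules are flushed from iptables and the sandbox is stopped) while the hello-world-app container is created, started, and immediately exits with status 1; the previous attempt is then removed, which matches the Exited/attempt-2 entry in the container status table below. A hypothetical way to inspect such a crash from the node itself (the container id is a placeholder) would be:
	
	    minikube -p addons-513852 ssh
	    sudo crictl ps -a --name hello-world-app   # list all attempts, including exited ones
	    sudo crictl logs <container-id>            # show why the container exited with status 1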
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	2eda891c56ae9       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                                             3 seconds ago       Exited              hello-world-app                          2                   414f7d343efba       hello-world-app-5d77478584-zvbn2
	1de417d0f2b01       docker.io/library/nginx@sha256:18d2bb20c22e511b92a3ec81f553edfcaeeb74fd1c96a92c56a6c4252c75eec7                                              2 minutes ago       Running             nginx                                    0                   56707387cbc95       nginx
	ff80a687e8468       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   28f82b502238f       csi-hostpathplugin-8kkcd
	aa48e5681627c       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago       Running             csi-provisioner                          0                   28f82b502238f       csi-hostpathplugin-8kkcd
	d3c69ecfa2cb9       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago       Running             liveness-probe                           0                   28f82b502238f       csi-hostpathplugin-8kkcd
	8c028d54e882b       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago       Running             hostpath                                 0                   28f82b502238f       csi-hostpathplugin-8kkcd
	4ed81cedf87de       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                                 3 minutes ago       Running             gcp-auth                                 0                   f9d05e6cc9d84       gcp-auth-d4c87556c-mkrlc
	5dd45b8940a9d       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago       Running             node-driver-registrar                    0                   28f82b502238f       csi-hostpathplugin-8kkcd
	0682e9a566810       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   3 minutes ago       Running             csi-external-health-monitor-controller   0                   28f82b502238f       csi-hostpathplugin-8kkcd
	0310b20961224       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago       Running             local-path-provisioner                   0                   315fbfb1377fb       local-path-provisioner-78b46b4d5c-t9rmh
	341011d10b321       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5                   3 minutes ago       Exited              patch                                    0                   1a375ee3a119f       ingress-nginx-admission-patch-xthzr
	4123946222b6b       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago       Running             csi-attacher                             0                   f79a314459d2d       csi-hostpath-attacher-0
	7f8eb329a0602       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              3 minutes ago       Running             csi-resizer                              0                   5721dff2406d0       csi-hostpath-resizer-0
	e02daab703b4a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5                   4 minutes ago       Exited              create                                   0                   1cdd3855f2780       ingress-nginx-admission-create-r2b7j
	771cb109d1f79       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago       Running             volume-snapshot-controller               0                   722863bbe27fb       snapshot-controller-58dbcc7b99-q5h4c
	251573cb85f04       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago       Running             volume-snapshot-controller               0                   da545940749e5       snapshot-controller-58dbcc7b99-mclbz
	14c1b0ffb4b48       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                                             4 minutes ago       Running             coredns                                  0                   f5c72f1476e94       coredns-5dd5756b68-gvfh4
	582f981971581       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago       Running             storage-provisioner                      0                   66e1975f4362d       storage-provisioner
	ec5053691c9ec       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                                                             4 minutes ago       Running             kube-proxy                               0                   059340538b79a       kube-proxy-8kkgn
	83d3a48bf3ebf       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                                             4 minutes ago       Running             kindnet-cni                              0                   b2d7d7c1d611c       kindnet-d7b6k
	dbef07d640e56       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                                                             5 minutes ago       Running             kube-controller-manager                  0                   109c1bb7bad9b       kube-controller-manager-addons-513852
	ae1f1c30ee64c       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                                             5 minutes ago       Running             etcd                                     0                   801f18c698050       etcd-addons-513852
	171aa4fbbc251       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                                                             5 minutes ago       Running             kube-apiserver                           0                   186724224b3de       kube-apiserver-addons-513852
	7074dc36c6f1d       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                                                             5 minutes ago       Running             kube-scheduler                           0                   cd1d45acb0857       kube-scheduler-addons-513852
	
	* 
	* ==> coredns [14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a] <==
	* [INFO] 10.244.0.18:32897 - 9231 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055424s
	[INFO] 10.244.0.18:32897 - 34563 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000063186s
	[INFO] 10.244.0.18:32897 - 55321 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055957s
	[INFO] 10.244.0.18:32897 - 6449 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056499s
	[INFO] 10.244.0.18:32897 - 47565 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001173693s
	[INFO] 10.244.0.18:32897 - 8782 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002206737s
	[INFO] 10.244.0.18:32897 - 55467 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000062415s
	[INFO] 10.244.0.18:43938 - 59019 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000149961s
	[INFO] 10.244.0.18:53763 - 62261 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043904s
	[INFO] 10.244.0.18:43938 - 49897 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000061094s
	[INFO] 10.244.0.18:53763 - 17741 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000109797s
	[INFO] 10.244.0.18:43938 - 10651 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000463s
	[INFO] 10.244.0.18:43938 - 55703 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000216092s
	[INFO] 10.244.0.18:53763 - 59900 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000199641s
	[INFO] 10.244.0.18:53763 - 8372 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000067263s
	[INFO] 10.244.0.18:43938 - 20165 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000177423s
	[INFO] 10.244.0.18:43938 - 10144 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063736s
	[INFO] 10.244.0.18:53763 - 26676 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000221146s
	[INFO] 10.244.0.18:53763 - 57851 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000415848s
	[INFO] 10.244.0.18:53763 - 18742 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000711741s
	[INFO] 10.244.0.18:43938 - 42275 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001552684s
	[INFO] 10.244.0.18:43938 - 28907 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002086576s
	[INFO] 10.244.0.18:43938 - 52962 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057s
	[INFO] 10.244.0.18:53763 - 41639 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002489108s
	[INFO] 10.244.0.18:53763 - 19464 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055769s
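	The coredns queries above show the usual in-cluster search-path expansion: with the kubelet-generated resolv.conf (ndots:5), lookups for hello-world-app.default.svc.cluster.local are first tried with each search suffix appended (svc.cluster.local, cluster.local, us-east-2.compute.internal), each returning NXDOMAIN, before the unsuffixed name answers with NOERROR. A quick sketch for confirming the search path from a pod, assuming an image that ships nslookup, would be:
	
	    kubectl --context addons-513852 exec nginx -- cat /etc/resolv.conf
	    kubectl --context addons-513852 exec nginx -- nslookup hello-world-app.default.svc.cluster.local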
	
	* 
	* ==> describe nodes <==
	* Name:               addons-513852
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-513852
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4
	                    minikube.k8s.io/name=addons-513852
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T00_12_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-513852
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-513852"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 00:12:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-513852
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 00:17:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 00:15:08 +0000   Tue, 12 Dec 2023 00:12:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 00:15:08 +0000   Tue, 12 Dec 2023 00:12:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 00:15:08 +0000   Tue, 12 Dec 2023 00:12:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 00:15:08 +0000   Tue, 12 Dec 2023 00:13:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-513852
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 57bcb8d38ea843449eabb057b789c54e
	  System UUID:                4f3fd475-4e07-4c43-9995-4e2e0466c129
	  Boot ID:                    1e71add7-2409-4eb4-97fc-c7110220f3c5
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-zvbn2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  default                     test-local-path                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-d4c87556c-mkrlc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 coredns-5dd5756b68-gvfh4                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m43s
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 csi-hostpathplugin-8kkcd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 etcd-addons-513852                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m56s
	  kube-system                 kindnet-d7b6k                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m44s
	  kube-system                 kube-apiserver-addons-513852               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-controller-manager-addons-513852      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-proxy-8kkgn                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-scheduler-addons-513852               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 snapshot-controller-58dbcc7b99-mclbz       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 snapshot-controller-58dbcc7b99-q5h4c       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  local-path-storage          local-path-provisioner-78b46b4d5c-t9rmh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m37s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m3s (x8 over 5m3s)  kubelet          Node addons-513852 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m3s (x8 over 5m3s)  kubelet          Node addons-513852 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m3s (x8 over 5m3s)  kubelet          Node addons-513852 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m56s                kubelet          Node addons-513852 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s                kubelet          Node addons-513852 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s                kubelet          Node addons-513852 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m44s                node-controller  Node addons-513852 event: Registered Node addons-513852 in Controller
	  Normal  NodeReady                4m10s                kubelet          Node addons-513852 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001096] FS-Cache: O-key=[8] '51613b0000000000'
	[  +0.000797] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001058] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=000000009ed47378
	[  +0.001097] FS-Cache: N-key=[8] '51613b0000000000'
	[  +0.004696] FS-Cache: Duplicate cookie detected
	[  +0.000742] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001026] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=000000006ac44817
	[  +0.001133] FS-Cache: O-key=[8] '51613b0000000000'
	[  +0.000752] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001015] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=00000000b962c00a
	[  +0.001103] FS-Cache: N-key=[8] '51613b0000000000'
	[  +3.096598] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000996] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=000000002fc1e9d2
	[  +0.001145] FS-Cache: O-key=[8] '50613b0000000000'
	[  +0.000744] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000970] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=000000009ed47378
	[  +0.001095] FS-Cache: N-key=[8] '50613b0000000000'
	[  +0.330575] FS-Cache: Duplicate cookie detected
	[  +0.000746] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=00000000caee5792
	[  +0.001154] FS-Cache: O-key=[8] '56613b0000000000'
	[  +0.000744] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000977] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=0000000001854e73
	[  +0.001084] FS-Cache: N-key=[8] '56613b0000000000'
	
	* 
	* ==> etcd [ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228] <==
	* {"level":"info","ts":"2023-12-12T00:12:29.681432Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:12:29.681479Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:12:48.963599Z","caller":"traceutil/trace.go:171","msg":"trace[675986471] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"108.272083ms","start":"2023-12-12T00:12:48.855312Z","end":"2023-12-12T00:12:48.963584Z","steps":["trace[675986471] 'process raft request'  (duration: 108.184799ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T00:12:49.040559Z","caller":"traceutil/trace.go:171","msg":"trace[962685417] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"171.876794ms","start":"2023-12-12T00:12:48.868563Z","end":"2023-12-12T00:12:49.04044Z","steps":["trace[962685417] 'process raft request'  (duration: 168.850734ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T00:12:49.043632Z","caller":"traceutil/trace.go:171","msg":"trace[1158355067] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"160.215882ms","start":"2023-12-12T00:12:48.883402Z","end":"2023-12-12T00:12:49.043618Z","steps":["trace[1158355067] 'process raft request'  (duration: 156.879202ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T00:12:49.754622Z","caller":"traceutil/trace.go:171","msg":"trace[307929165] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"128.71364ms","start":"2023-12-12T00:12:49.625893Z","end":"2023-12-12T00:12:49.754606Z","steps":["trace[307929165] 'process raft request'  (duration: 128.594407ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T00:12:51.068403Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"311.478003ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T00:12:51.091794Z","caller":"traceutil/trace.go:171","msg":"trace[1239514687] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:406; }","duration":"334.878339ms","start":"2023-12-12T00:12:50.756893Z","end":"2023-12-12T00:12:51.091771Z","steps":["trace[1239514687] 'agreement among raft nodes before linearized reading'  (duration: 30.499317ms)","trace[1239514687] 'range keys from in-memory index tree'  (duration: 280.954784ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T00:12:51.091848Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T00:12:50.75688Z","time spent":"334.946785ms","remote":"127.0.0.1:53860","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-12-12T00:12:51.186039Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"292.850258ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128025751547152546 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/certificate-controller\" mod_revision:239 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/certificate-controller\" value_size:139 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/certificate-controller\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-12T00:12:51.200802Z","caller":"traceutil/trace.go:171","msg":"trace[847282624] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"443.791298ms","start":"2023-12-12T00:12:50.756974Z","end":"2023-12-12T00:12:51.200765Z","steps":["trace[847282624] 'process raft request'  (duration: 30.75357ms)","trace[847282624] 'compare'  (duration: 212.020476ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T00:12:51.200946Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T00:12:50.756964Z","time spent":"443.909703ms","remote":"127.0.0.1:54036","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":207,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/certificate-controller\" mod_revision:239 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/certificate-controller\" value_size:139 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/certificate-controller\" > >"}
	{"level":"info","ts":"2023-12-12T00:12:51.226382Z","caller":"traceutil/trace.go:171","msg":"trace[396054803] linearizableReadLoop","detail":"{readStateIndex:418; appliedIndex:417; }","duration":"227.250669ms","start":"2023-12-12T00:12:50.999109Z","end":"2023-12-12T00:12:51.22636Z","steps":["trace[396054803] 'read index received'  (duration: 299.437µs)","trace[396054803] 'applied index is now lower than readState.Index'  (duration: 226.947416ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T00:12:51.302025Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"302.922647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-513852\" ","response":"range_response_count:1 size:5743"}
	{"level":"info","ts":"2023-12-12T00:12:51.314521Z","caller":"traceutil/trace.go:171","msg":"trace[409684636] range","detail":"{range_begin:/registry/minions/addons-513852; range_end:; response_count:1; response_revision:407; }","duration":"315.41615ms","start":"2023-12-12T00:12:50.999083Z","end":"2023-12-12T00:12:51.314499Z","steps":["trace[409684636] 'agreement among raft nodes before linearized reading'  (duration: 227.998995ms)","trace[409684636] 'range keys from in-memory index tree'  (duration: 74.881183ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T00:12:51.306287Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.09872ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128025751547152548 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kindnet-d7b6k.179fed2efbb7a0b1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet-d7b6k.179fed2efbb7a0b1\" value_size:630 lease:8128025751547151790 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-12-12T00:12:51.314946Z","caller":"traceutil/trace.go:171","msg":"trace[190259492] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"223.479843ms","start":"2023-12-12T00:12:51.091458Z","end":"2023-12-12T00:12:51.314938Z","steps":["trace[190259492] 'process raft request'  (duration: 223.399247ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T00:12:51.319673Z","caller":"traceutil/trace.go:171","msg":"trace[653067686] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"251.562609ms","start":"2023-12-12T00:12:51.068094Z","end":"2023-12-12T00:12:51.319657Z","steps":["trace[653067686] 'process raft request'  (duration: 128.049085ms)","trace[653067686] 'compare'  (duration: 105.796111ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T00:12:51.314792Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T00:12:50.999045Z","time spent":"315.720281ms","remote":"127.0.0.1:54008","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":5767,"request content":"key:\"/registry/minions/addons-513852\" "}
	{"level":"warn","ts":"2023-12-12T00:12:51.892819Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.136689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-12-12T00:12:51.905677Z","caller":"traceutil/trace.go:171","msg":"trace[552383047] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:413; }","duration":"159.999937ms","start":"2023-12-12T00:12:51.745663Z","end":"2023-12-12T00:12:51.905663Z","steps":["trace[552383047] 'agreement among raft nodes before linearized reading'  (duration: 60.646448ms)","trace[552383047] 'range keys from in-memory index tree'  (duration: 86.456248ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T00:12:51.905439Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.80366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T00:12:51.906022Z","caller":"traceutil/trace.go:171","msg":"trace[622420802] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:413; }","duration":"160.401952ms","start":"2023-12-12T00:12:51.745611Z","end":"2023-12-12T00:12:51.906013Z","steps":["trace[622420802] 'agreement among raft nodes before linearized reading'  (duration: 60.717134ms)","trace[622420802] 'range keys from in-memory index tree'  (duration: 99.074358ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T00:12:51.905462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.599555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-12-12T00:12:51.906102Z","caller":"traceutil/trace.go:171","msg":"trace[844617221] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:413; }","duration":"160.244353ms","start":"2023-12-12T00:12:51.74585Z","end":"2023-12-12T00:12:51.906095Z","steps":["trace[844617221] 'agreement among raft nodes before linearized reading'  (duration: 60.444543ms)","trace[844617221] 'range keys from in-memory index tree'  (duration: 99.126894ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [4ed81cedf87dec99686adf2b83b9050047b670a0deeda2400f065d9d5dd5519a] <==
	* 2023/12/12 00:14:11 GCP Auth Webhook started!
	2023/12/12 00:14:41 Ready to marshal response ...
	2023/12/12 00:14:41 Ready to write response ...
	2023/12/12 00:14:46 Ready to marshal response ...
	2023/12/12 00:14:46 Ready to write response ...
	2023/12/12 00:15:00 Ready to marshal response ...
	2023/12/12 00:15:00 Ready to write response ...
	2023/12/12 00:15:00 Ready to marshal response ...
	2023/12/12 00:15:00 Ready to write response ...
	2023/12/12 00:17:05 Ready to marshal response ...
	2023/12/12 00:17:05 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  00:17:31 up  6:59,  0 users,  load average: 0.43, 0.90, 0.61
	Linux addons-513852 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656] <==
	* I1212 00:15:31.041144       1 main.go:227] handling current node
	I1212 00:15:41.053726       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:15:41.053755       1 main.go:227] handling current node
	I1212 00:15:51.058313       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:15:51.058341       1 main.go:227] handling current node
	I1212 00:16:01.070124       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:16:01.070162       1 main.go:227] handling current node
	I1212 00:16:11.074663       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:16:11.074692       1 main.go:227] handling current node
	I1212 00:16:21.079400       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:16:21.079431       1 main.go:227] handling current node
	I1212 00:16:31.089533       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:16:31.089563       1 main.go:227] handling current node
	I1212 00:16:41.093994       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:16:41.094026       1 main.go:227] handling current node
	I1212 00:16:51.103029       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:16:51.103058       1 main.go:227] handling current node
	I1212 00:17:01.107350       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:17:01.107381       1 main.go:227] handling current node
	I1212 00:17:11.112431       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:17:11.112545       1 main.go:227] handling current node
	I1212 00:17:21.117350       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:17:21.117692       1 main.go:227] handling current node
	I1212 00:17:31.129769       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:17:31.129793       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050] <==
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 00:12:57.833418       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.106.237.27"}
	W1212 00:13:21.303813       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.237.27:443: connect: connection refused
	E1212 00:13:21.304432       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.237.27:443: connect: connection refused
	W1212 00:13:21.304999       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.237.27:443: connect: connection refused
	E1212 00:13:21.305067       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.237.27:443: connect: connection refused
	W1212 00:13:21.380084       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.237.27:443: connect: connection refused
	E1212 00:13:21.380223       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.237.27:443: connect: connection refused
	I1212 00:13:32.213849       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 00:13:52.037129       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 00:13:52.037205       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 00:13:52.038096       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1212 00:13:52.038947       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.192.0:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.192.0:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.192.0:443: connect: connection refused
	E1212 00:13:52.039765       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.192.0:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.192.0:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.192.0:443: connect: connection refused
	E1212 00:13:52.046484       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.192.0:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.192.0:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.192.0:443: connect: connection refused
	I1212 00:13:52.229239       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 00:14:32.217910       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 00:14:46.180945       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1212 00:14:46.519267       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.246.84"}
	I1212 00:14:48.402186       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1212 00:14:48.417603       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1212 00:14:49.434212       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1212 00:14:53.073675       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1212 00:17:05.576396       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.73.0"}
	
	* 
	* ==> kube-controller-manager [dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e] <==
	* W1212 00:15:07.909557       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:15:07.909590       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 00:15:17.391543       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1212 00:15:17.391584       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 00:15:17.865773       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1212 00:15:17.865988       1 shared_informer.go:318] Caches are synced for garbage collector
	W1212 00:15:33.056026       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:15:33.056059       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 00:16:11.260965       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:16:11.260997       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 00:17:05.256774       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1212 00:17:05.290694       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-zvbn2"
	I1212 00:17:05.298203       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.237064ms"
	I1212 00:17:05.306478       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="8.033755ms"
	I1212 00:17:05.306632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="35.487µs"
	I1212 00:17:05.332096       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="100.075µs"
	W1212 00:17:08.934358       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:17:08.934394       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 00:17:15.129221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="48.122µs"
	I1212 00:17:16.140874       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="202.184µs"
	I1212 00:17:17.131247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="71.038µs"
	I1212 00:17:22.853502       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1212 00:17:22.853855       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="63.957µs"
	I1212 00:17:22.858606       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1212 00:17:28.167538       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="101.79µs"
	
	* 
	* ==> kube-proxy [ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b] <==
	* I1212 00:12:53.482323       1 server_others.go:69] "Using iptables proxy"
	I1212 00:12:53.564308       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1212 00:12:53.629018       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:12:53.639669       1 server_others.go:152] "Using iptables Proxier"
	I1212 00:12:53.639775       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 00:12:53.639806       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 00:12:53.639908       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 00:12:53.640208       1 server.go:846] "Version info" version="v1.28.4"
	I1212 00:12:53.640392       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:12:53.641320       1 config.go:188] "Starting service config controller"
	I1212 00:12:53.641411       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 00:12:53.641462       1 config.go:97] "Starting endpoint slice config controller"
	I1212 00:12:53.641493       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 00:12:53.642035       1 config.go:315] "Starting node config controller"
	I1212 00:12:53.642226       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:12:53.742606       1 shared_informer.go:318] Caches are synced for node config
	I1212 00:12:53.751432       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:12:53.757640       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06] <==
	* W1212 00:12:32.615189       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 00:12:32.615204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 00:12:32.615258       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 00:12:32.615272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 00:12:32.615323       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 00:12:32.615337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 00:12:32.615391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 00:12:32.615405       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 00:12:32.615462       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 00:12:32.615478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 00:12:32.615534       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 00:12:32.615549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 00:12:32.615608       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 00:12:32.615623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 00:12:33.420377       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 00:12:33.420413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 00:12:33.435224       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 00:12:33.435335       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 00:12:33.473986       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 00:12:33.474086       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 00:12:33.487719       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 00:12:33.487748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 00:12:33.530959       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 00:12:33.530993       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1212 00:12:34.200812       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 12 00:17:17 addons-513852 kubelet[1352]: E1212 00:17:17.116942    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-zvbn2_default(27ad768a-ef86-4e41-b7b9-73dcb1adcf03)\"" pod="default/hello-world-app-5d77478584-zvbn2" podUID="27ad768a-ef86-4e41-b7b9-73dcb1adcf03"
	Dec 12 00:17:21 addons-513852 kubelet[1352]: I1212 00:17:21.548411    1352 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrdql\" (UniqueName: \"kubernetes.io/projected/29a08ebe-149e-48f7-96e3-41c96a718619-kube-api-access-xrdql\") pod \"29a08ebe-149e-48f7-96e3-41c96a718619\" (UID: \"29a08ebe-149e-48f7-96e3-41c96a718619\") "
	Dec 12 00:17:21 addons-513852 kubelet[1352]: I1212 00:17:21.550746    1352 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a08ebe-149e-48f7-96e3-41c96a718619-kube-api-access-xrdql" (OuterVolumeSpecName: "kube-api-access-xrdql") pod "29a08ebe-149e-48f7-96e3-41c96a718619" (UID: "29a08ebe-149e-48f7-96e3-41c96a718619"). InnerVolumeSpecName "kube-api-access-xrdql". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 12 00:17:21 addons-513852 kubelet[1352]: I1212 00:17:21.649239    1352 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xrdql\" (UniqueName: \"kubernetes.io/projected/29a08ebe-149e-48f7-96e3-41c96a718619-kube-api-access-xrdql\") on node \"addons-513852\" DevicePath \"\""
	Dec 12 00:17:22 addons-513852 kubelet[1352]: I1212 00:17:22.128137    1352 scope.go:117] "RemoveContainer" containerID="d6a18022c2f005c9e14cd7cec1185e3c863c15e58308335333751db40ccb2215"
	Dec 12 00:17:22 addons-513852 kubelet[1352]: E1212 00:17:22.583831    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\"\"" pod="default/test-local-path" podUID="7c63f77a-c315-489d-8fed-a3446132fc8a"
	Dec 12 00:17:22 addons-513852 kubelet[1352]: E1212 00:17:22.642961    1352 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/66602d4ab7a49682d800ab83b29e8a143aa7c73fce77e54db5959ad976702a2d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/66602d4ab7a49682d800ab83b29e8a143aa7c73fce77e54db5959ad976702a2d/diff: no such file or directory, extraDiskErr: <nil>
	Dec 12 00:17:23 addons-513852 kubelet[1352]: I1212 00:17:23.584670    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="01c19fff-c5b3-44c5-bfbe-a2fc39fd9f57" path="/var/lib/kubelet/pods/01c19fff-c5b3-44c5-bfbe-a2fc39fd9f57/volumes"
	Dec 12 00:17:23 addons-513852 kubelet[1352]: I1212 00:17:23.585729    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="29a08ebe-149e-48f7-96e3-41c96a718619" path="/var/lib/kubelet/pods/29a08ebe-149e-48f7-96e3-41c96a718619/volumes"
	Dec 12 00:17:23 addons-513852 kubelet[1352]: I1212 00:17:23.586266    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="35e54f4b-4a8d-42bd-aadd-8a4a8a09881c" path="/var/lib/kubelet/pods/35e54f4b-4a8d-42bd-aadd-8a4a8a09881c/volumes"
	Dec 12 00:17:26 addons-513852 kubelet[1352]: I1212 00:17:26.138738    1352 scope.go:117] "RemoveContainer" containerID="74584247c4a245721f12d8f52c79f99bff31f1056d52023f2ae49e8b3280c94e"
	Dec 12 00:17:26 addons-513852 kubelet[1352]: I1212 00:17:26.157456    1352 scope.go:117] "RemoveContainer" containerID="74584247c4a245721f12d8f52c79f99bff31f1056d52023f2ae49e8b3280c94e"
	Dec 12 00:17:26 addons-513852 kubelet[1352]: E1212 00:17:26.157848    1352 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74584247c4a245721f12d8f52c79f99bff31f1056d52023f2ae49e8b3280c94e\": container with ID starting with 74584247c4a245721f12d8f52c79f99bff31f1056d52023f2ae49e8b3280c94e not found: ID does not exist" containerID="74584247c4a245721f12d8f52c79f99bff31f1056d52023f2ae49e8b3280c94e"
	Dec 12 00:17:26 addons-513852 kubelet[1352]: I1212 00:17:26.157896    1352 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74584247c4a245721f12d8f52c79f99bff31f1056d52023f2ae49e8b3280c94e"} err="failed to get container status \"74584247c4a245721f12d8f52c79f99bff31f1056d52023f2ae49e8b3280c94e\": rpc error: code = NotFound desc = could not find container \"74584247c4a245721f12d8f52c79f99bff31f1056d52023f2ae49e8b3280c94e\": container with ID starting with 74584247c4a245721f12d8f52c79f99bff31f1056d52023f2ae49e8b3280c94e not found: ID does not exist"
	Dec 12 00:17:26 addons-513852 kubelet[1352]: I1212 00:17:26.181499    1352 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0bc9cb0b-015a-4c22-bafd-02a5359dc2a8-webhook-cert\") pod \"0bc9cb0b-015a-4c22-bafd-02a5359dc2a8\" (UID: \"0bc9cb0b-015a-4c22-bafd-02a5359dc2a8\") "
	Dec 12 00:17:26 addons-513852 kubelet[1352]: I1212 00:17:26.181574    1352 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdkmw\" (UniqueName: \"kubernetes.io/projected/0bc9cb0b-015a-4c22-bafd-02a5359dc2a8-kube-api-access-pdkmw\") pod \"0bc9cb0b-015a-4c22-bafd-02a5359dc2a8\" (UID: \"0bc9cb0b-015a-4c22-bafd-02a5359dc2a8\") "
	Dec 12 00:17:26 addons-513852 kubelet[1352]: I1212 00:17:26.184335    1352 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc9cb0b-015a-4c22-bafd-02a5359dc2a8-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0bc9cb0b-015a-4c22-bafd-02a5359dc2a8" (UID: "0bc9cb0b-015a-4c22-bafd-02a5359dc2a8"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 00:17:26 addons-513852 kubelet[1352]: I1212 00:17:26.185906    1352 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc9cb0b-015a-4c22-bafd-02a5359dc2a8-kube-api-access-pdkmw" (OuterVolumeSpecName: "kube-api-access-pdkmw") pod "0bc9cb0b-015a-4c22-bafd-02a5359dc2a8" (UID: "0bc9cb0b-015a-4c22-bafd-02a5359dc2a8"). InnerVolumeSpecName "kube-api-access-pdkmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 12 00:17:26 addons-513852 kubelet[1352]: I1212 00:17:26.282273    1352 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0bc9cb0b-015a-4c22-bafd-02a5359dc2a8-webhook-cert\") on node \"addons-513852\" DevicePath \"\""
	Dec 12 00:17:26 addons-513852 kubelet[1352]: I1212 00:17:26.282321    1352 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pdkmw\" (UniqueName: \"kubernetes.io/projected/0bc9cb0b-015a-4c22-bafd-02a5359dc2a8-kube-api-access-pdkmw\") on node \"addons-513852\" DevicePath \"\""
	Dec 12 00:17:27 addons-513852 kubelet[1352]: I1212 00:17:27.582576    1352 scope.go:117] "RemoveContainer" containerID="972878a0592aff06db946d8cb05f35a3d2a17fc6ad693759013c6e152421a70d"
	Dec 12 00:17:27 addons-513852 kubelet[1352]: I1212 00:17:27.583964    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0bc9cb0b-015a-4c22-bafd-02a5359dc2a8" path="/var/lib/kubelet/pods/0bc9cb0b-015a-4c22-bafd-02a5359dc2a8/volumes"
	Dec 12 00:17:28 addons-513852 kubelet[1352]: I1212 00:17:28.146046    1352 scope.go:117] "RemoveContainer" containerID="972878a0592aff06db946d8cb05f35a3d2a17fc6ad693759013c6e152421a70d"
	Dec 12 00:17:28 addons-513852 kubelet[1352]: I1212 00:17:28.146363    1352 scope.go:117] "RemoveContainer" containerID="2eda891c56ae954ee6cc396862dd8c62ae43b1aad5166672342ce433482f110f"
	Dec 12 00:17:28 addons-513852 kubelet[1352]: E1212 00:17:28.146651    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-zvbn2_default(27ad768a-ef86-4e41-b7b9-73dcb1adcf03)\"" pod="default/hello-world-app-5d77478584-zvbn2" podUID="27ad768a-ef86-4e41-b7b9-73dcb1adcf03"
	
	* 
	* ==> storage-provisioner [582f981971581f27a18ecff9abba1e059d9b6df136537998c5f2f99c23aeb845] <==
	* I1212 00:13:22.065137       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:13:22.099145       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:13:22.099228       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 00:13:22.174350       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:13:22.175830       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-513852_5c42c213-ab15-47c4-9420-9dcb09a350b9!
	I1212 00:13:22.194450       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aac062d1-c946-48d9-b1db-64590d74d0c4", APIVersion:"v1", ResourceVersion:"876", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-513852_5c42c213-ab15-47c4-9420-9dcb09a350b9 became leader
	I1212 00:13:22.276893       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-513852_5c42c213-ab15-47c4-9420-9dcb09a350b9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-513852 -n addons-513852
helpers_test.go:261: (dbg) Run:  kubectl --context addons-513852 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: test-local-path
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-513852 describe pod test-local-path
helpers_test.go:282: (dbg) kubectl --context addons-513852 describe pod test-local-path:

                                                
                                                
-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-513852/192.168.49.2
	Start Time:       Tue, 12 Dec 2023 00:15:05 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rs7qj (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-rs7qj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m27s                default-scheduler  Successfully assigned default/test-local-path to addons-513852
	  Warning  Failed     117s                 kubelet            Failed to pull image "busybox:stable": loading manifest for target platform: reading manifest sha256:1e190d3f03348e063cf58d643c2b39bed38f19d77a3accf616a0f53460671358 in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    51s (x3 over 2m27s)  kubelet            Pulling image "busybox:stable"
	  Warning  Failed     21s (x3 over 117s)   kubelet            Error: ErrImagePull
	  Warning  Failed     21s (x2 over 75s)    kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    10s (x3 over 116s)   kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     10s (x3 over 116s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (167.20s)

                                                
                                    
TestAddons/parallel/CSI (371.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 4.878821ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-513852 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-513852 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-513852 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-513852 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-513852 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-513852 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-513852 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-513852 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-513852 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-513852 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-513852 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c44c9440-0c5a-45d9-bb4e-13c9f53d6c3a] Pending
helpers_test.go:344: "task-pv-pod" [c44c9440-0c5a-45d9-bb4e-13c9f53d6c3a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
addons_test.go:578: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:578: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-513852 -n addons-513852
addons_test.go:578: TestAddons/parallel/CSI: showing logs for failed pods as of 2023-12-12 00:23:53.025044807 +0000 UTC m=+776.380031173
addons_test.go:578: (dbg) Run:  kubectl --context addons-513852 describe po task-pv-pod -n default
addons_test.go:578: (dbg) kubectl --context addons-513852 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-513852/192.168.49.2
Start Time:       Tue, 12 Dec 2023 00:17:52 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.26
IPs:
IP:  10.244.0.26
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v6hnq (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-v6hnq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m1s                   default-scheduler  Successfully assigned default/task-pv-pod to addons-513852
Warning  Failed     3m59s (x2 over 5m30s)  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:736342e81e97220f954b8c33846ba80d2d95f59b30225a5c63d063c8b250b0ab in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    2m17s (x4 over 6m1s)   kubelet            Pulling image "docker.io/nginx"
Warning  Failed     95s (x4 over 5m30s)    kubelet            Error: ErrImagePull
Warning  Failed     95s (x2 over 3m6s)     kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     81s (x6 over 5m30s)    kubelet            Error: ImagePullBackOff
Normal   BackOff    56s (x8 over 5m30s)    kubelet            Back-off pulling image "docker.io/nginx"
addons_test.go:578: (dbg) Run:  kubectl --context addons-513852 logs task-pv-pod -n default
addons_test.go:578: (dbg) Non-zero exit: kubectl --context addons-513852 logs task-pv-pod -n default: exit status 1 (122.557947ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:578: kubectl --context addons-513852 logs task-pv-pod -n default: exit status 1
addons_test.go:579: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-513852
helpers_test.go:235: (dbg) docker inspect addons-513852:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce2d53620b64e156938dec5a07dcf4ce9ce60732763a7a769f51e71c667ffeef",
	        "Created": "2023-12-12T00:12:12.845410053Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1118408,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T00:12:13.167413464Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5372d9a9dbba152548ea1c7dddaca1a9a8c998722f22aaa148c1ee00bf6473be",
	        "ResolvConfPath": "/var/lib/docker/containers/ce2d53620b64e156938dec5a07dcf4ce9ce60732763a7a769f51e71c667ffeef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce2d53620b64e156938dec5a07dcf4ce9ce60732763a7a769f51e71c667ffeef/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce2d53620b64e156938dec5a07dcf4ce9ce60732763a7a769f51e71c667ffeef/hosts",
	        "LogPath": "/var/lib/docker/containers/ce2d53620b64e156938dec5a07dcf4ce9ce60732763a7a769f51e71c667ffeef/ce2d53620b64e156938dec5a07dcf4ce9ce60732763a7a769f51e71c667ffeef-json.log",
	        "Name": "/addons-513852",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-513852:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-513852",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a1c74c3ba85f1c0bb9c17328adca6839f763072fd13b4edd025f8ad800a85c44-init/diff:/var/lib/docker/overlay2/c2a4fdcea722509eecd2151e38f63a7bf15f9db138183afe352dd4d4bae4600f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a1c74c3ba85f1c0bb9c17328adca6839f763072fd13b4edd025f8ad800a85c44/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a1c74c3ba85f1c0bb9c17328adca6839f763072fd13b4edd025f8ad800a85c44/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a1c74c3ba85f1c0bb9c17328adca6839f763072fd13b4edd025f8ad800a85c44/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-513852",
	                "Source": "/var/lib/docker/volumes/addons-513852/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-513852",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-513852",
	                "name.minikube.sigs.k8s.io": "addons-513852",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f065a0147216bc31a78d162befc74d6c0ea3d9202fa33ad349a1269cf8c8a082",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34010"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34009"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34006"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34008"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34007"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f065a0147216",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-513852": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ce2d53620b64",
	                        "addons-513852"
	                    ],
	                    "NetworkID": "5d39f67815fb8bc7c9d433babd97a1dbd454bd30553390a89917be64d14a1586",
	                    "EndpointID": "2e5d6648e00b7aa54def1f7f1bb9a7eb9650373e02bc52823e8d931ae9c2b24c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-513852 -n addons-513852
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-513852 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-513852 logs -n 25: (1.587312911s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-661903   | jenkins | v1.32.0 | 12 Dec 23 00:10 UTC |                     |
	|         | -p download-only-661903              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-661903   | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |                     |
	|         | -p download-only-661903              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-661903   | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |                     |
	|         | -p download-only-661903              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:11 UTC |
	| delete  | -p download-only-661903              | download-only-661903   | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:11 UTC |
	| delete  | -p download-only-661903              | download-only-661903   | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:11 UTC |
	| start   | --download-only -p                   | download-docker-765600 | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |                     |
	|         | download-docker-765600               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-765600            | download-docker-765600 | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:11 UTC |
	| start   | --download-only -p                   | binary-mirror-675945   | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |                     |
	|         | binary-mirror-675945                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41867               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-675945              | binary-mirror-675945   | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:11 UTC |
	| addons  | enable dashboard -p                  | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |                     |
	|         | addons-513852                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |                     |
	|         | addons-513852                        |                        |         |         |                     |                     |
	| start   | -p addons-513852 --wait=true         | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC | 12 Dec 23 00:14 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:14 UTC |
	|         | -p addons-513852                     |                        |         |         |                     |                     |
	| addons  | addons-513852 addons                 | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:14 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-513852 ip                     | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:14 UTC |
	| addons  | addons-513852 addons disable         | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:14 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:14 UTC |
	|         | addons-513852                        |                        |         |         |                     |                     |
	| ssh     | addons-513852 ssh curl -s            | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:14 UTC | 12 Dec 23 00:14 UTC |
	|         | addons-513852                        |                        |         |         |                     |                     |
	| ip      | addons-513852 ip                     | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:17 UTC | 12 Dec 23 00:17 UTC |
	| addons  | addons-513852 addons disable         | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:17 UTC | 12 Dec 23 00:17 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-513852 addons disable         | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:17 UTC | 12 Dec 23 00:17 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-513852          | jenkins | v1.32.0 | 12 Dec 23 00:17 UTC | 12 Dec 23 00:17 UTC |
	|         | -p addons-513852                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:11:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:11:49.980348 1117956 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:11:49.980505 1117956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:49.980515 1117956 out.go:309] Setting ErrFile to fd 2...
	I1212 00:11:49.980522 1117956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:49.980797 1117956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	I1212 00:11:49.981227 1117956 out.go:303] Setting JSON to false
	I1212 00:11:49.982106 1117956 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24856,"bootTime":1702315054,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 00:11:49.982185 1117956 start.go:138] virtualization:  
	I1212 00:11:49.984397 1117956 out.go:177] * [addons-513852] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:11:49.987330 1117956 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:11:49.989320 1117956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:11:49.987465 1117956 notify.go:220] Checking for updates...
	I1212 00:11:49.992478 1117956 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:11:49.994441 1117956 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	I1212 00:11:49.996319 1117956 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:11:49.998290 1117956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:11:50.003522 1117956 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:11:50.029057 1117956 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:11:50.029180 1117956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:50.117727 1117956 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-12 00:11:50.108281719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:11:50.117855 1117956 docker.go:295] overlay module found
	I1212 00:11:50.121000 1117956 out.go:177] * Using the docker driver based on user configuration
	I1212 00:11:50.123235 1117956 start.go:298] selected driver: docker
	I1212 00:11:50.123260 1117956 start.go:902] validating driver "docker" against <nil>
	I1212 00:11:50.123282 1117956 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:11:50.123895 1117956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:50.189471 1117956 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-12 00:11:50.179958072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:11:50.189618 1117956 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 00:11:50.189844 1117956 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:11:50.192110 1117956 out.go:177] * Using Docker driver with root privileges
	I1212 00:11:50.193808 1117956 cni.go:84] Creating CNI manager for ""
	I1212 00:11:50.193832 1117956 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:11:50.193844 1117956 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:11:50.193858 1117956 start_flags.go:323] config:
	{Name:addons-513852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-513852 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:11:50.196055 1117956 out.go:177] * Starting control plane node addons-513852 in cluster addons-513852
	I1212 00:11:50.197869 1117956 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 00:11:50.199533 1117956 out.go:177] * Pulling base image ...
	I1212 00:11:50.201391 1117956 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 00:11:50.201455 1117956 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1212 00:11:50.201467 1117956 cache.go:56] Caching tarball of preloaded images
	I1212 00:11:50.201487 1117956 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:11:50.201562 1117956 preload.go:174] Found /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:11:50.201572 1117956 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 00:11:50.201918 1117956 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/config.json ...
	I1212 00:11:50.201947 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/config.json: {Name:mk7a236300fb3ff19195b124fc742b2f1a01fa4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:50.218590 1117956 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 to local cache
	I1212 00:11:50.218695 1117956 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local cache directory
	I1212 00:11:50.218729 1117956 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local cache directory, skipping pull
	I1212 00:11:50.218737 1117956 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 exists in cache, skipping pull
	I1212 00:11:50.218744 1117956 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 as a tarball
	I1212 00:11:50.218750 1117956 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 from local cache
	I1212 00:12:05.913714 1117956 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 from cached tarball
	I1212 00:12:05.913751 1117956 cache.go:194] Successfully downloaded all kic artifacts
	I1212 00:12:05.913816 1117956 start.go:365] acquiring machines lock for addons-513852: {Name:mk7c3507316ea70dea507396c4d038034300e987 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:12:05.914522 1117956 start.go:369] acquired machines lock for "addons-513852" in 683.032µs
	I1212 00:12:05.914556 1117956 start.go:93] Provisioning new machine with config: &{Name:addons-513852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-513852 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:12:05.914649 1117956 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:12:05.916943 1117956 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1212 00:12:05.917189 1117956 start.go:159] libmachine.API.Create for "addons-513852" (driver="docker")
	I1212 00:12:05.917218 1117956 client.go:168] LocalClient.Create starting
	I1212 00:12:05.917346 1117956 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem
	I1212 00:12:06.434649 1117956 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem
	I1212 00:12:06.615409 1117956 cli_runner.go:164] Run: docker network inspect addons-513852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:12:06.631768 1117956 cli_runner.go:211] docker network inspect addons-513852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:12:06.631859 1117956 network_create.go:281] running [docker network inspect addons-513852] to gather additional debugging logs...
	I1212 00:12:06.631880 1117956 cli_runner.go:164] Run: docker network inspect addons-513852
	W1212 00:12:06.649038 1117956 cli_runner.go:211] docker network inspect addons-513852 returned with exit code 1
	I1212 00:12:06.649071 1117956 network_create.go:284] error running [docker network inspect addons-513852]: docker network inspect addons-513852: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-513852 not found
	I1212 00:12:06.649083 1117956 network_create.go:286] output of [docker network inspect addons-513852]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-513852 not found
	
	** /stderr **
	I1212 00:12:06.649195 1117956 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:12:06.666526 1117956 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024fccb0}
	I1212 00:12:06.666562 1117956 network_create.go:124] attempt to create docker network addons-513852 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 00:12:06.666624 1117956 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-513852 addons-513852
	I1212 00:12:06.736540 1117956 network_create.go:108] docker network addons-513852 192.168.49.0/24 created
	I1212 00:12:06.736572 1117956 kic.go:121] calculated static IP "192.168.49.2" for the "addons-513852" container
	I1212 00:12:06.736641 1117956 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:12:06.752938 1117956 cli_runner.go:164] Run: docker volume create addons-513852 --label name.minikube.sigs.k8s.io=addons-513852 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:12:06.770605 1117956 oci.go:103] Successfully created a docker volume addons-513852
	I1212 00:12:06.770689 1117956 cli_runner.go:164] Run: docker run --rm --name addons-513852-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-513852 --entrypoint /usr/bin/test -v addons-513852:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -d /var/lib
	I1212 00:12:08.567386 1117956 cli_runner.go:217] Completed: docker run --rm --name addons-513852-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-513852 --entrypoint /usr/bin/test -v addons-513852:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -d /var/lib: (1.79663978s)
	I1212 00:12:08.567417 1117956 oci.go:107] Successfully prepared a docker volume addons-513852
	I1212 00:12:08.567450 1117956 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 00:12:08.567475 1117956 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:12:08.567548 1117956 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-513852:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:12:12.755902 1117956 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-513852:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -I lz4 -xf /preloaded.tar -C /extractDir: (4.188315546s)
	I1212 00:12:12.755932 1117956 kic.go:203] duration metric: took 4.188461 seconds to extract preloaded images to volume
	W1212 00:12:12.756070 1117956 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 00:12:12.756212 1117956 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:12:12.824907 1117956 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-513852 --name addons-513852 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-513852 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-513852 --network addons-513852 --ip 192.168.49.2 --volume addons-513852:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401
	I1212 00:12:13.179806 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Running}}
	I1212 00:12:13.200941 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:13.229388 1117956 cli_runner.go:164] Run: docker exec addons-513852 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:12:13.309684 1117956 oci.go:144] the created container "addons-513852" has a running status.
	I1212 00:12:13.309719 1117956 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa...
	I1212 00:12:13.816261 1117956 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:12:13.847239 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:13.874214 1117956 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:12:13.874233 1117956 kic_runner.go:114] Args: [docker exec --privileged addons-513852 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:12:13.947728 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:13.975467 1117956 machine.go:88] provisioning docker machine ...
	I1212 00:12:13.975496 1117956 ubuntu.go:169] provisioning hostname "addons-513852"
	I1212 00:12:13.975561 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:14.000966 1117956 main.go:141] libmachine: Using SSH client type: native
	I1212 00:12:14.001503 1117956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34010 <nil> <nil>}
	I1212 00:12:14.001526 1117956 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-513852 && echo "addons-513852" | sudo tee /etc/hostname
	I1212 00:12:14.207917 1117956 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-513852
	
	I1212 00:12:14.207993 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:14.234038 1117956 main.go:141] libmachine: Using SSH client type: native
	I1212 00:12:14.234444 1117956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34010 <nil> <nil>}
	I1212 00:12:14.234466 1117956 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-513852' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-513852/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-513852' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:12:14.382990 1117956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:12:14.383029 1117956 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17764-1111943/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-1111943/.minikube}
	I1212 00:12:14.383048 1117956 ubuntu.go:177] setting up certificates
	I1212 00:12:14.383056 1117956 provision.go:83] configureAuth start
	I1212 00:12:14.383123 1117956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-513852
	I1212 00:12:14.408999 1117956 provision.go:138] copyHostCerts
	I1212 00:12:14.409069 1117956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem (1082 bytes)
	I1212 00:12:14.409207 1117956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem (1123 bytes)
	I1212 00:12:14.409407 1117956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem (1679 bytes)
	I1212 00:12:14.409485 1117956 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem org=jenkins.addons-513852 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-513852]
	I1212 00:12:14.645539 1117956 provision.go:172] copyRemoteCerts
	I1212 00:12:14.645619 1117956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:12:14.645665 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:14.663953 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:14.768200 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:12:14.798848 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1212 00:12:14.826637 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:12:14.855015 1117956 provision.go:86] duration metric: configureAuth took 471.944905ms
	I1212 00:12:14.855051 1117956 ubuntu.go:193] setting minikube options for container-runtime
	I1212 00:12:14.855241 1117956 config.go:182] Loaded profile config "addons-513852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:12:14.855355 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:14.873699 1117956 main.go:141] libmachine: Using SSH client type: native
	I1212 00:12:14.874123 1117956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34010 <nil> <nil>}
	I1212 00:12:14.874153 1117956 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:12:15.141653 1117956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:12:15.141736 1117956 machine.go:91] provisioned docker machine in 1.166248968s
	I1212 00:12:15.141760 1117956 client.go:171] LocalClient.Create took 9.224530447s
	I1212 00:12:15.141797 1117956 start.go:167] duration metric: libmachine.API.Create for "addons-513852" took 9.224606777s
	I1212 00:12:15.141806 1117956 start.go:300] post-start starting for "addons-513852" (driver="docker")
	I1212 00:12:15.141816 1117956 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:12:15.141896 1117956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:12:15.141944 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:15.161093 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:15.264006 1117956 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:12:15.268211 1117956 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:12:15.268247 1117956 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 00:12:15.268262 1117956 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 00:12:15.268269 1117956 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 00:12:15.268278 1117956 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/addons for local assets ...
	I1212 00:12:15.268341 1117956 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/files for local assets ...
	I1212 00:12:15.268371 1117956 start.go:303] post-start completed in 126.559429ms
	I1212 00:12:15.268691 1117956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-513852
	I1212 00:12:15.286818 1117956 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/config.json ...
	I1212 00:12:15.287102 1117956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:12:15.287156 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:15.304557 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:15.399081 1117956 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:12:15.404353 1117956 start.go:128] duration metric: createHost completed in 9.489689455s
	I1212 00:12:15.404378 1117956 start.go:83] releasing machines lock for "addons-513852", held for 9.489840737s
	I1212 00:12:15.404441 1117956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-513852
	I1212 00:12:15.421683 1117956 ssh_runner.go:195] Run: cat /version.json
	I1212 00:12:15.421739 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:15.421808 1117956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:12:15.421863 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:15.448504 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:15.449135 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:15.674311 1117956 ssh_runner.go:195] Run: systemctl --version
	I1212 00:12:15.679840 1117956 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:12:15.824649 1117956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:12:15.830080 1117956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:12:15.852154 1117956 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 00:12:15.852231 1117956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:12:15.892011 1117956 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1212 00:12:15.892032 1117956 start.go:475] detecting cgroup driver to use...
	I1212 00:12:15.892062 1117956 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 00:12:15.892116 1117956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:12:15.909782 1117956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:12:15.923668 1117956 docker.go:203] disabling cri-docker service (if available) ...
	I1212 00:12:15.923785 1117956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:12:15.939423 1117956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:12:15.956047 1117956 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:12:16.056533 1117956 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:12:16.159154 1117956 docker.go:219] disabling docker service ...
	I1212 00:12:16.159230 1117956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:12:16.180291 1117956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:12:16.193831 1117956 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:12:16.289274 1117956 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:12:16.403568 1117956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:12:16.417192 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:12:16.436268 1117956 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 00:12:16.436360 1117956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:12:16.448285 1117956 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:12:16.448368 1117956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:12:16.460251 1117956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:12:16.472053 1117956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:12:16.483999 1117956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:12:16.494708 1117956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:12:16.504753 1117956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:12:16.515203 1117956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:12:16.609839 1117956 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:12:16.736465 1117956 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:12:16.736548 1117956 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:12:16.741081 1117956 start.go:543] Will wait 60s for crictl version
	I1212 00:12:16.741186 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:12:16.745743 1117956 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:12:16.790478 1117956 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1212 00:12:16.790580 1117956 ssh_runner.go:195] Run: crio --version
	I1212 00:12:16.832748 1117956 ssh_runner.go:195] Run: crio --version
	I1212 00:12:16.877873 1117956 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1212 00:12:16.879736 1117956 cli_runner.go:164] Run: docker network inspect addons-513852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:12:16.897288 1117956 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:12:16.901669 1117956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:12:16.914680 1117956 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 00:12:16.914747 1117956 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:12:16.982602 1117956 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 00:12:16.982624 1117956 crio.go:415] Images already preloaded, skipping extraction
	I1212 00:12:16.982680 1117956 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:12:17.029868 1117956 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 00:12:17.029891 1117956 cache_images.go:84] Images are preloaded, skipping loading
	I1212 00:12:17.029970 1117956 ssh_runner.go:195] Run: crio config
	I1212 00:12:17.088349 1117956 cni.go:84] Creating CNI manager for ""
	I1212 00:12:17.088373 1117956 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:12:17.088404 1117956 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 00:12:17.088428 1117956 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-513852 NodeName:addons-513852 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:12:17.088570 1117956 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-513852"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:12:17.088649 1117956 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-513852 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-513852 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 00:12:17.088718 1117956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 00:12:17.099406 1117956 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:12:17.099488 1117956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:12:17.110021 1117956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1212 00:12:17.130590 1117956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:12:17.151380 1117956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1212 00:12:17.172094 1117956 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:12:17.176367 1117956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:12:17.189616 1117956 certs.go:56] Setting up /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852 for IP: 192.168.49.2
	I1212 00:12:17.189651 1117956 certs.go:190] acquiring lock for shared ca certs: {Name:mk50788b4819ee46b65351495e43cdf246a6ddce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:17.189813 1117956 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key
	I1212 00:12:17.471046 1117956 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt ...
	I1212 00:12:17.471077 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt: {Name:mk63f7231b362eb36ee624ca1d988a5c0eeb54ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:17.471271 1117956 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key ...
	I1212 00:12:17.471284 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key: {Name:mk001a7dec35b6cd75317cfa0518572d810733b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:17.472052 1117956 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key
	I1212 00:12:18.072946 1117956 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.crt ...
	I1212 00:12:18.072985 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.crt: {Name:mk5adb4c4a83191ec01fbd158f8e2301c5b4e380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:18.073187 1117956 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key ...
	I1212 00:12:18.073202 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key: {Name:mk83486da479f56678dc25ea9891063a949213c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:18.073359 1117956 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.key
	I1212 00:12:18.073385 1117956 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt with IP's: []
	I1212 00:12:18.203736 1117956 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt ...
	I1212 00:12:18.203766 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: {Name:mk2ddad058277b67b414650caa9775d45cf301f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:18.203953 1117956 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.key ...
	I1212 00:12:18.203978 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.key: {Name:mkc82ca82fb5fe9dc3da535893414833cbeb9830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:18.204083 1117956 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.key.dd3b5fb2
	I1212 00:12:18.204103 1117956 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 00:12:19.377813 1117956 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.crt.dd3b5fb2 ...
	I1212 00:12:19.377847 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.crt.dd3b5fb2: {Name:mke601a62b20dc2e283b96952577fc54ee9e8063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:19.378033 1117956 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.key.dd3b5fb2 ...
	I1212 00:12:19.378047 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.key.dd3b5fb2: {Name:mk1924c0862ca7e851aef86a9d35758f0682eae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:19.378132 1117956 certs.go:337] copying /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.crt
	I1212 00:12:19.378236 1117956 certs.go:341] copying /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.key
	I1212 00:12:19.378288 1117956 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/proxy-client.key
	I1212 00:12:19.378314 1117956 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/proxy-client.crt with IP's: []
	I1212 00:12:19.582042 1117956 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/proxy-client.crt ...
	I1212 00:12:19.582073 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/proxy-client.crt: {Name:mk3734d85f87c17800a9550539cf823d1b1562fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:19.582279 1117956 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/proxy-client.key ...
	I1212 00:12:19.582293 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/proxy-client.key: {Name:mk4265909e2109577a4034bebd2d8e7075db6fd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:19.582496 1117956 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:12:19.582548 1117956 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:12:19.582578 1117956 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:12:19.582609 1117956 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem (1679 bytes)
	I1212 00:12:19.583214 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 00:12:19.611890 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:12:19.639499 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:12:19.667412 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:12:19.695169 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:12:19.721927 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:12:19.749756 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:12:19.777031 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:12:19.804303 1117956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:12:19.831862 1117956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:12:19.852550 1117956 ssh_runner.go:195] Run: openssl version
	I1212 00:12:19.859270 1117956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:12:19.870709 1117956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:12:19.875024 1117956 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:12:19.875099 1117956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:12:19.883356 1117956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:12:19.894678 1117956 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 00:12:19.898800 1117956 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 00:12:19.898847 1117956 kubeadm.go:404] StartCluster: {Name:addons-513852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-513852 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:12:19.898926 1117956 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:12:19.898996 1117956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:12:19.941002 1117956 cri.go:89] found id: ""
	I1212 00:12:19.941121 1117956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:12:19.951581 1117956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:12:19.962004 1117956 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:12:19.962068 1117956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:12:19.972318 1117956 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:12:19.972391 1117956 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:12:20.031248 1117956 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 00:12:20.031564 1117956 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 00:12:20.076698 1117956 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:12:20.076787 1117956 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1212 00:12:20.076840 1117956 kubeadm.go:322] OS: Linux
	I1212 00:12:20.076889 1117956 kubeadm.go:322] CGROUPS_CPU: enabled
	I1212 00:12:20.076939 1117956 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1212 00:12:20.076986 1117956 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1212 00:12:20.077035 1117956 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1212 00:12:20.077084 1117956 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1212 00:12:20.077134 1117956 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1212 00:12:20.077181 1117956 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1212 00:12:20.077228 1117956 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1212 00:12:20.077288 1117956 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1212 00:12:20.166769 1117956 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:12:20.167349 1117956 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:12:20.167491 1117956 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 00:12:20.414887 1117956 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:12:20.417171 1117956 out.go:204]   - Generating certificates and keys ...
	I1212 00:12:20.417332 1117956 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 00:12:20.417416 1117956 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 00:12:20.776471 1117956 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:12:21.319332 1117956 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:12:21.838896 1117956 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:12:22.401007 1117956 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 00:12:23.123073 1117956 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 00:12:23.123477 1117956 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-513852 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:12:24.014299 1117956 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 00:12:24.014670 1117956 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-513852 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:12:24.725411 1117956 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:12:24.876312 1117956 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:12:25.595368 1117956 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 00:12:25.595685 1117956 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:12:25.935577 1117956 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:12:26.574571 1117956 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:12:27.215100 1117956 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:12:27.515971 1117956 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:12:27.516832 1117956 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:12:27.519493 1117956 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:12:27.521899 1117956 out.go:204]   - Booting up control plane ...
	I1212 00:12:27.522017 1117956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:12:27.522091 1117956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:12:27.522798 1117956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:12:27.535118 1117956 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:12:27.535908 1117956 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:12:27.536186 1117956 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 00:12:27.634330 1117956 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 00:12:34.137227 1117956 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502237 seconds
	I1212 00:12:34.137370 1117956 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:12:34.153363 1117956 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:12:34.678099 1117956 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:12:34.678299 1117956 kubeadm.go:322] [mark-control-plane] Marking the node addons-513852 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:12:35.189766 1117956 kubeadm.go:322] [bootstrap-token] Using token: dlqiuc.q2dtcr4gd8ieq310
	I1212 00:12:35.191953 1117956 out.go:204]   - Configuring RBAC rules ...
	I1212 00:12:35.192069 1117956 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:12:35.196771 1117956 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:12:35.206364 1117956 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:12:35.210211 1117956 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:12:35.214010 1117956 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:12:35.218315 1117956 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:12:35.233026 1117956 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:12:35.498416 1117956 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 00:12:35.633813 1117956 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 00:12:35.634883 1117956 kubeadm.go:322] 
	I1212 00:12:35.634950 1117956 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 00:12:35.634956 1117956 kubeadm.go:322] 
	I1212 00:12:35.635028 1117956 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 00:12:35.635033 1117956 kubeadm.go:322] 
	I1212 00:12:35.635058 1117956 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 00:12:35.635113 1117956 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:12:35.635161 1117956 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:12:35.635166 1117956 kubeadm.go:322] 
	I1212 00:12:35.635223 1117956 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 00:12:35.635229 1117956 kubeadm.go:322] 
	I1212 00:12:35.635274 1117956 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:12:35.635278 1117956 kubeadm.go:322] 
	I1212 00:12:35.635327 1117956 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 00:12:35.635397 1117956 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:12:35.635461 1117956 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:12:35.635468 1117956 kubeadm.go:322] 
	I1212 00:12:35.635547 1117956 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:12:35.635619 1117956 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 00:12:35.635624 1117956 kubeadm.go:322] 
	I1212 00:12:35.635702 1117956 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dlqiuc.q2dtcr4gd8ieq310 \
	I1212 00:12:35.635799 1117956 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:423d166c085e277a11bea519bc38c8d176eb97d5c6d6f0fd8c403765ff119d59 \
	I1212 00:12:35.635819 1117956 kubeadm.go:322] 	--control-plane 
	I1212 00:12:35.635824 1117956 kubeadm.go:322] 
	I1212 00:12:35.635903 1117956 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:12:35.635911 1117956 kubeadm.go:322] 
	I1212 00:12:35.635988 1117956 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dlqiuc.q2dtcr4gd8ieq310 \
	I1212 00:12:35.636084 1117956 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:423d166c085e277a11bea519bc38c8d176eb97d5c6d6f0fd8c403765ff119d59 
	I1212 00:12:35.640258 1117956 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1212 00:12:35.640374 1117956 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:12:35.640507 1117956 cni.go:84] Creating CNI manager for ""
	I1212 00:12:35.640536 1117956 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:12:35.644576 1117956 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 00:12:35.646621 1117956 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:12:35.662664 1117956 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 00:12:35.662683 1117956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 00:12:35.708331 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
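A minimal verification sketch for the CNI step above (not executed by this test run; the DaemonSet name "kindnet" and the kube-system namespace are assumptions about the applied cni.yaml manifest):

  # Hypothetical check: confirm the kindnet DaemonSet rolled out after cni.yaml was applied
  kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system rollout status daemonset/kindnet --timeout=120s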
	I1212 00:12:36.568292 1117956 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:12:36.568461 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:36.568556 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4 minikube.k8s.io/name=addons-513852 minikube.k8s.io/updated_at=2023_12_12T00_12_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:36.726594 1117956 ops.go:34] apiserver oom_adj: -16
	I1212 00:12:36.726716 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:36.831700 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:37.425395 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:37.925775 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:38.425570 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:38.925267 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:39.425165 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:39.925629 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:40.425231 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:40.925207 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:41.425401 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:41.926084 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:42.425138 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:42.925111 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:43.425419 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:43.925666 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:44.425796 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:44.925133 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:45.426010 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:45.926037 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:46.425388 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:46.925638 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:47.425599 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:47.925511 1117956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:12:48.070642 1117956 kubeadm.go:1088] duration metric: took 11.502239293s to wait for elevateKubeSystemPrivileges.
	I1212 00:12:48.070673 1117956 kubeadm.go:406] StartCluster complete in 28.17182842s
	I1212 00:12:48.070690 1117956 settings.go:142] acquiring lock: {Name:mk4639df610f4394c6679c82a1803a108086063e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:48.071250 1117956 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:12:48.071631 1117956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/kubeconfig: {Name:mk6bda1f8356012618f11e41d531a3f786e443d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:12:48.072867 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:12:48.073159 1117956 config.go:182] Loaded profile config "addons-513852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:12:48.073310 1117956 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1212 00:12:48.073374 1117956 addons.go:69] Setting volumesnapshots=true in profile "addons-513852"
	I1212 00:12:48.073390 1117956 addons.go:231] Setting addon volumesnapshots=true in "addons-513852"
	I1212 00:12:48.073442 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.073902 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.075151 1117956 addons.go:69] Setting ingress-dns=true in profile "addons-513852"
	I1212 00:12:48.075187 1117956 addons.go:231] Setting addon ingress-dns=true in "addons-513852"
	I1212 00:12:48.075232 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.075666 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.078271 1117956 addons.go:69] Setting inspektor-gadget=true in profile "addons-513852"
	I1212 00:12:48.078308 1117956 addons.go:231] Setting addon inspektor-gadget=true in "addons-513852"
	I1212 00:12:48.078357 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.078792 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.079167 1117956 addons.go:69] Setting cloud-spanner=true in profile "addons-513852"
	I1212 00:12:48.079189 1117956 addons.go:231] Setting addon cloud-spanner=true in "addons-513852"
	I1212 00:12:48.079230 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.079621 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.080752 1117956 addons.go:69] Setting metrics-server=true in profile "addons-513852"
	I1212 00:12:48.080784 1117956 addons.go:231] Setting addon metrics-server=true in "addons-513852"
	I1212 00:12:48.080819 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.082527 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.085987 1117956 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-513852"
	I1212 00:12:48.086054 1117956 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-513852"
	I1212 00:12:48.086095 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.086547 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.087006 1117956 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-513852"
	I1212 00:12:48.087030 1117956 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-513852"
	I1212 00:12:48.087071 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.087485 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.093064 1117956 addons.go:69] Setting registry=true in profile "addons-513852"
	I1212 00:12:48.093101 1117956 addons.go:231] Setting addon registry=true in "addons-513852"
	I1212 00:12:48.093148 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.093648 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.099799 1117956 addons.go:69] Setting default-storageclass=true in profile "addons-513852"
	I1212 00:12:48.099844 1117956 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-513852"
	I1212 00:12:48.100207 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.110183 1117956 addons.go:69] Setting storage-provisioner=true in profile "addons-513852"
	I1212 00:12:48.110222 1117956 addons.go:231] Setting addon storage-provisioner=true in "addons-513852"
	I1212 00:12:48.110266 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.110712 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.124211 1117956 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-513852"
	I1212 00:12:48.124252 1117956 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-513852"
	I1212 00:12:48.124588 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.126828 1117956 addons.go:69] Setting gcp-auth=true in profile "addons-513852"
	I1212 00:12:48.126862 1117956 mustload.go:65] Loading cluster: addons-513852
	I1212 00:12:48.127057 1117956 config.go:182] Loaded profile config "addons-513852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:12:48.127401 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.141927 1117956 addons.go:69] Setting ingress=true in profile "addons-513852"
	I1212 00:12:48.141966 1117956 addons.go:231] Setting addon ingress=true in "addons-513852"
	I1212 00:12:48.142025 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.142483 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.255134 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1212 00:12:48.257186 1117956 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1212 00:12:48.257206 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1212 00:12:48.257317 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.293347 1117956 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1212 00:12:48.297531 1117956 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 00:12:48.297589 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1212 00:12:48.297669 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.304868 1117956 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1212 00:12:48.306784 1117956 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 00:12:48.306803 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 00:12:48.306867 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.315959 1117956 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1212 00:12:48.319841 1117956 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1212 00:12:48.320023 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1212 00:12:48.320120 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.354977 1117956 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1212 00:12:48.358041 1117956 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1212 00:12:48.358089 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1212 00:12:48.358176 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.374326 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1212 00:12:48.383587 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1212 00:12:48.386638 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1212 00:12:48.391096 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1212 00:12:48.385820 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:12:48.374567 1117956 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1212 00:12:48.374573 1117956 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:12:48.382178 1117956 addons.go:231] Setting addon default-storageclass=true in "addons-513852"
	I1212 00:12:48.385901 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.396878 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1212 00:12:48.398393 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1212 00:12:48.399944 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1212 00:12:48.398784 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.399620 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.398234 1117956 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-513852"
	I1212 00:12:48.401765 1117956 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1212 00:12:48.409655 1117956 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1212 00:12:48.411494 1117956 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1212 00:12:48.411510 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1212 00:12:48.411564 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.409580 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.410533 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:48.452998 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:48.462865 1117956 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-513852" context rescaled to 1 replicas
	I1212 00:12:48.462900 1117956 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:12:48.410624 1117956 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:12:48.410946 1117956 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 00:12:48.467218 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1212 00:12:48.467294 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.477553 1117956 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 00:12:48.474647 1117956 out.go:177] * Verifying Kubernetes components...
	I1212 00:12:48.474664 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:12:48.483621 1117956 out.go:177]   - Using image docker.io/registry:2.8.3
	I1212 00:12:48.481654 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.499718 1117956 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1212 00:12:48.497492 1117956 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 00:12:48.497553 1117956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:12:48.497634 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.498482 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.504363 1117956 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1212 00:12:48.504379 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1212 00:12:48.504489 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.524243 1117956 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 00:12:48.524279 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1212 00:12:48.524390 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.539107 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.557499 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.615104 1117956 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:12:48.615125 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:12:48.615185 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.638693 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.673175 1117956 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1212 00:12:48.674776 1117956 out.go:177]   - Using image docker.io/busybox:stable
	I1212 00:12:48.682657 1117956 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 00:12:48.682682 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1212 00:12:48.682745 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:48.674016 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.674951 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.713497 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.730748 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.748403 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.758481 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:48.804133 1117956 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1212 00:12:48.804155 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1212 00:12:48.904899 1117956 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1212 00:12:48.904923 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1212 00:12:48.940403 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 00:12:49.035446 1117956 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 00:12:49.035507 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1212 00:12:49.039737 1117956 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1212 00:12:49.039772 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1212 00:12:49.046405 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1212 00:12:49.083871 1117956 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1212 00:12:49.083935 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1212 00:12:49.095572 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 00:12:49.128273 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 00:12:49.138307 1117956 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1212 00:12:49.138368 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1212 00:12:49.194599 1117956 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1212 00:12:49.194672 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1212 00:12:49.203105 1117956 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1212 00:12:49.203166 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1212 00:12:49.248661 1117956 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 00:12:49.248699 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 00:12:49.259141 1117956 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1212 00:12:49.259168 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1212 00:12:49.267504 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:12:49.280583 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:12:49.347706 1117956 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1212 00:12:49.347731 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1212 00:12:49.364875 1117956 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1212 00:12:49.364898 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1212 00:12:49.368016 1117956 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1212 00:12:49.368038 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1212 00:12:49.374538 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 00:12:49.463648 1117956 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1212 00:12:49.463681 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1212 00:12:49.468569 1117956 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 00:12:49.468592 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 00:12:49.547922 1117956 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1212 00:12:49.547946 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1212 00:12:49.567320 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1212 00:12:49.573714 1117956 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1212 00:12:49.573746 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1212 00:12:49.693329 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 00:12:49.727590 1117956 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1212 00:12:49.727615 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1212 00:12:49.775466 1117956 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 00:12:49.775496 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1212 00:12:49.805209 1117956 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1212 00:12:49.805239 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1212 00:12:49.925543 1117956 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 00:12:49.925567 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1212 00:12:49.958326 1117956 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1212 00:12:49.958353 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1212 00:12:49.973296 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 00:12:50.097361 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 00:12:50.104837 1117956 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1212 00:12:50.104863 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1212 00:12:50.206610 1117956 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1212 00:12:50.206635 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1212 00:12:50.340267 1117956 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1212 00:12:50.340297 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1212 00:12:50.403923 1117956 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1212 00:12:50.403994 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1212 00:12:50.494557 1117956 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 00:12:50.494628 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1212 00:12:50.565874 1117956 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.166966289s)
	I1212 00:12:50.565952 1117956 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
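The sed pipeline completed above rewrites the CoreDNS Corefile in place; the fragment below is reconstructed from the sed expressions themselves (a sketch, not read back from the cluster):

  # Hypothetical read-back of the patched Corefile
  kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
  # Per the sed expressions, the output should now contain a "log" line before "errors" and:
  #         hosts {
  #            192.168.49.1 host.minikube.internal
  #            fallthrough
  #         }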
	I1212 00:12:50.566026 1117956 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.017098679s)
	I1212 00:12:50.566868 1117956 node_ready.go:35] waiting up to 6m0s for node "addons-513852" to be "Ready" ...
	I1212 00:12:50.574278 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 00:12:52.704278 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.763830059s)
	I1212 00:12:52.725208 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.678769569s)
	I1212 00:12:52.725359 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.629715078s)
	I1212 00:12:52.948177 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:12:54.132790 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.004432937s)
	I1212 00:12:54.132972 1117956 addons.go:467] Verifying addon ingress=true in "addons-513852"
	I1212 00:12:54.133030 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.758462506s)
	I1212 00:12:54.135486 1117956 out.go:177] * Verifying ingress addon...
	I1212 00:12:54.133277 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.565929417s)
	I1212 00:12:54.135575 1117956 addons.go:467] Verifying addon registry=true in "addons-513852"
	I1212 00:12:54.141300 1117956 out.go:177] * Verifying registry addon...
	I1212 00:12:54.139430 1117956 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1212 00:12:54.133448 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.160114301s)
	I1212 00:12:54.133494 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.036104767s)
	I1212 00:12:54.132885 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.865357489s)
	I1212 00:12:54.132950 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.852344232s)
	I1212 00:12:54.133364 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.440003321s)
	I1212 00:12:54.143179 1117956 addons.go:467] Verifying addon metrics-server=true in "addons-513852"
	W1212 00:12:54.143361 1117956 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 00:12:54.143378 1117956 retry.go:31] will retry after 372.771501ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
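The failure above is the usual CRD-establishment race: the VolumeSnapshot CRDs and a VolumeSnapshotClass object are applied in the same kubectl invocation, so the REST mapping for the new kind is not yet available. One way to sidestep it is sketched below (an assumption about a workaround, not what minikube does; the retry later in this log simply re-applies with --force):

  # Hypothetical two-phase apply: wait for the CRD to be Established, then apply the snapshot class
  kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml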
	I1212 00:12:54.143910 1117956 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1212 00:12:54.154778 1117956 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 00:12:54.154858 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:54.161841 1117956 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 00:12:54.161913 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:54.164952 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:54.170396 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1212 00:12:54.172381 1117956 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
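The warning above is an optimistic-concurrency conflict while marking the "standard" StorageClass as default: another writer updated the object first, so the update at the stale resourceVersion is rejected. A hedged manual equivalent of the intended change (the exact API call minikube makes may differ):

  # Hypothetical re-run of the default-storageclass marking
  kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'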
	I1212 00:12:54.355213 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.780842585s)
	I1212 00:12:54.355286 1117956 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-513852"
	I1212 00:12:54.358766 1117956 out.go:177] * Verifying csi-hostpath-driver addon...
	I1212 00:12:54.361504 1117956 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1212 00:12:54.374314 1117956 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 00:12:54.374382 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:54.378237 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:54.516927 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 00:12:54.681267 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:54.682444 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:54.883887 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:55.179319 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:55.181377 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:55.341377 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:12:55.383243 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:55.551335 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.034359297s)
	I1212 00:12:55.669675 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:55.674681 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:55.884733 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:56.170320 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:56.174975 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:56.261698 1117956 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1212 00:12:56.261813 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:56.279702 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:56.382704 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:56.397855 1117956 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1212 00:12:56.422889 1117956 addons.go:231] Setting addon gcp-auth=true in "addons-513852"
	I1212 00:12:56.422956 1117956 host.go:66] Checking if "addons-513852" exists ...
	I1212 00:12:56.423454 1117956 cli_runner.go:164] Run: docker container inspect addons-513852 --format={{.State.Status}}
	I1212 00:12:56.452694 1117956 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1212 00:12:56.452752 1117956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-513852
	I1212 00:12:56.492937 1117956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34010 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/addons-513852/id_rsa Username:docker}
	I1212 00:12:56.651469 1117956 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 00:12:56.653121 1117956 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1212 00:12:56.655090 1117956 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1212 00:12:56.655110 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1212 00:12:56.670974 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:56.674215 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:56.730040 1117956 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1212 00:12:56.730066 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1212 00:12:56.776167 1117956 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 00:12:56.776191 1117956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1212 00:12:56.819061 1117956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 00:12:56.883116 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:57.170393 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:57.176333 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:57.342215 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:12:57.382849 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:57.670181 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:57.674843 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:57.940346 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:58.196930 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:58.200946 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:58.270812 1117956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.451712747s)
	I1212 00:12:58.273307 1117956 addons.go:467] Verifying addon gcp-auth=true in "addons-513852"
	I1212 00:12:58.275820 1117956 out.go:177] * Verifying gcp-auth addon...
	I1212 00:12:58.278581 1117956 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1212 00:12:58.291328 1117956 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1212 00:12:58.291356 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:58.301862 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:58.397849 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:58.670087 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:58.674612 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:58.806179 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:58.883993 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:59.170132 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:59.174591 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:59.306402 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:59.383080 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:59.670305 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:59.674011 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:59.805919 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:59.841787 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:12:59.882739 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:00.175306 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:00.176447 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:00.305994 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:00.383987 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:00.670494 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:00.674208 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:00.806037 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:00.883700 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:01.173614 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:01.175998 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:01.307609 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:01.383477 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:01.669929 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:01.674715 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:01.805920 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:01.850060 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:01.883006 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:02.170733 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:02.177919 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:02.306029 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:02.383521 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:02.670286 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:02.674839 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:02.805094 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:02.883646 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:03.169933 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:03.174624 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:03.305649 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:03.382626 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:03.669913 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:03.674879 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:03.805941 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:03.883179 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:04.169396 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:04.174222 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:04.305344 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:04.341763 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:04.382985 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:04.669990 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:04.674731 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:04.805299 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:04.883351 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:05.170478 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:05.174330 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:05.305538 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:05.382638 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:05.669895 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:05.674382 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:05.805603 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:05.884203 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:06.174368 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:06.175101 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:06.305230 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:06.382461 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:06.669479 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:06.674055 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:06.805574 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:06.841384 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:06.882473 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:07.170045 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:07.175067 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:07.305535 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:07.382846 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:07.670361 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:07.674704 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:07.805055 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:07.882879 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:08.170384 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:08.175023 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:08.306259 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:08.383156 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:08.669954 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:08.674629 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:08.806181 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:08.841570 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:08.882812 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:09.169421 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:09.174344 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:09.305827 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:09.383468 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:09.669556 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:09.674358 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:09.805692 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:09.883399 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:10.171636 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:10.174529 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:10.305958 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:10.383427 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:10.669883 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:10.674790 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:10.805405 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:10.884335 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:11.169634 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:11.174326 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:11.305450 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:11.341355 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:11.382669 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:11.670095 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:11.674828 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:11.805988 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:11.883112 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:12.169405 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:12.174107 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:12.305208 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:12.382717 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:12.670289 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:12.675086 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:12.805465 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:12.882933 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:13.169536 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:13.174455 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:13.306124 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:13.383443 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:13.669756 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:13.674474 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:13.805647 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:13.841393 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:13.888331 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:14.169913 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:14.174488 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:14.305961 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:14.382720 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:14.670190 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:14.674868 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:14.805432 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:14.882605 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:15.169921 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:15.174983 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:15.306092 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:15.382665 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:15.670086 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:15.675284 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:15.805438 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:15.883164 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:16.170189 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:16.174943 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:16.306074 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:16.341425 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:16.385971 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:16.670569 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:16.674017 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:16.806086 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:16.883062 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:17.169462 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:17.174313 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:17.310265 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:17.382346 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:17.670137 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:17.674845 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:17.805227 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:17.883477 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:18.170506 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:18.174135 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:18.305809 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:18.383139 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:18.669725 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:18.674351 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:18.805444 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:18.841617 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:18.883142 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:19.169970 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:19.174583 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:19.305941 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:19.382377 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:19.669841 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:19.674495 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:19.806069 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:19.882758 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:20.169417 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:20.174066 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:20.306292 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:20.383059 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:20.669452 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:20.673992 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:20.806284 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:20.842952 1117956 node_ready.go:58] node "addons-513852" has status "Ready":"False"
	I1212 00:13:20.890366 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:21.170254 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:21.175278 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:21.320440 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:21.371746 1117956 node_ready.go:49] node "addons-513852" has status "Ready":"True"
	I1212 00:13:21.371807 1117956 node_ready.go:38] duration metric: took 30.804869695s waiting for node "addons-513852" to be "Ready" ...
	I1212 00:13:21.371845 1117956 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:13:21.395414 1117956 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gvfh4" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:21.398446 1117956 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 00:13:21.398517 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:21.746659 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:21.748275 1117956 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 00:13:21.748343 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:21.826103 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:21.913645 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:22.191781 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:22.202100 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:22.308339 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:22.385332 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:22.673688 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:22.676426 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:22.806261 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:22.886604 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:22.975567 1117956 pod_ready.go:92] pod "coredns-5dd5756b68-gvfh4" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:22.975596 1117956 pod_ready.go:81] duration metric: took 1.580101524s waiting for pod "coredns-5dd5756b68-gvfh4" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:22.975614 1117956 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-513852" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.002469 1117956 pod_ready.go:92] pod "etcd-addons-513852" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:23.002500 1117956 pod_ready.go:81] duration metric: took 26.879184ms waiting for pod "etcd-addons-513852" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.002516 1117956 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-513852" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.015027 1117956 pod_ready.go:92] pod "kube-apiserver-addons-513852" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:23.015054 1117956 pod_ready.go:81] duration metric: took 12.528563ms waiting for pod "kube-apiserver-addons-513852" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.015067 1117956 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-513852" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.021044 1117956 pod_ready.go:92] pod "kube-controller-manager-addons-513852" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:23.021069 1117956 pod_ready.go:81] duration metric: took 5.99407ms waiting for pod "kube-controller-manager-addons-513852" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.021083 1117956 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8kkgn" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.170783 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:23.180641 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:23.305630 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:23.342713 1117956 pod_ready.go:92] pod "kube-proxy-8kkgn" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:23.342737 1117956 pod_ready.go:81] duration metric: took 321.646074ms waiting for pod "kube-proxy-8kkgn" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.342750 1117956 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-513852" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.384059 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:23.669761 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:23.678381 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:23.745460 1117956 pod_ready.go:92] pod "kube-scheduler-addons-513852" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:23.745533 1117956 pod_ready.go:81] duration metric: took 402.775007ms waiting for pod "kube-scheduler-addons-513852" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.745560 1117956 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:23.806559 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:23.895182 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:24.177836 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:24.180486 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:24.306177 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:24.385746 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:24.671870 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:24.677237 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:24.810007 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:24.884830 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:25.171907 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:25.200541 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:25.306505 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:25.385729 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:25.671151 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:25.677816 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:25.809076 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:25.905072 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:26.049731 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:26.170243 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:26.175695 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:26.305222 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:26.384208 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:26.669661 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:26.675015 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:26.808378 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:26.883746 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:27.170664 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:27.176113 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:27.306370 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:27.390019 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:27.671856 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:27.680769 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:27.807107 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:27.884326 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:28.050874 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:28.170592 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:28.174575 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:28.306108 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:28.388625 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:28.670652 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:28.678889 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:28.805495 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:28.884389 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:29.170458 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:29.174894 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:29.305513 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:29.383586 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:29.670922 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:29.676453 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:29.806302 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:29.885524 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:30.051297 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:30.171706 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:30.176082 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:30.306477 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:30.384297 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:30.670972 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:30.677876 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:30.807162 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:30.887599 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:31.170576 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:31.174808 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:31.305224 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:31.383621 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:31.669818 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:31.675295 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:31.805906 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:31.884466 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:32.170263 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:32.175315 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:32.310138 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:32.384752 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:32.549505 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:32.669976 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:32.675724 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:32.806342 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:32.884023 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:33.171032 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:33.178521 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:33.306442 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:33.384454 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:33.670597 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:33.674730 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:33.805547 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:33.883763 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:34.183630 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:34.184545 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:34.307143 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:34.385034 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:34.670919 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:34.676923 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:34.805475 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:34.885086 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:35.053055 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:35.171032 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:35.175309 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:35.305664 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:35.384265 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:35.670659 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:35.675159 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:35.806203 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:35.883595 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:36.170313 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:36.176016 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:36.307815 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:36.386073 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:36.671065 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:36.676182 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:36.807106 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:36.885406 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:37.180564 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:37.183283 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:37.306056 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:37.385779 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:37.550139 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:37.671583 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:37.683599 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:37.806205 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:37.887977 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:38.177186 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:38.184178 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:38.310155 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:38.385530 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:38.670303 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:38.676577 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:38.806166 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:38.884125 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:39.171047 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:39.175437 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:39.305449 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:39.384305 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:39.550922 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:39.670673 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:39.675157 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:39.807965 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:39.885495 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:40.172004 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:40.176908 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:40.305910 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:40.384948 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:40.678671 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:40.680603 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:40.807761 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:40.884880 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:41.171579 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:41.182219 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:41.306157 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:41.388291 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:41.674616 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:41.679562 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:41.806812 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:41.885043 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:42.049617 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:42.187791 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:42.192243 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:42.306562 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:42.385714 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:42.674339 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:42.678933 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:42.806165 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:42.890816 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:43.178736 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:43.196631 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:43.306683 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:43.390817 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:43.671111 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:43.676559 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:43.807650 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:43.884947 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:44.061475 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:44.172988 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:44.177608 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:44.308637 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:44.385829 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:44.674664 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:44.679239 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:44.807128 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:44.907038 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:45.172154 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:45.177794 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:45.306425 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:45.385790 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:45.669964 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:45.675512 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:45.807227 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:45.883958 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:46.170910 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:46.176337 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:46.306840 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:46.384496 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:46.553413 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:46.670422 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:46.675384 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:46.805536 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:46.885359 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:47.170211 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:47.175763 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:47.305716 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:47.390525 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:47.669790 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:47.675248 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:47.810683 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:47.884092 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:48.170269 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:48.175696 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:48.305309 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:48.383577 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:48.553983 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:48.671279 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:48.676515 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:48.806186 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:48.884441 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:49.170764 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:49.176235 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:49.307529 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:49.392910 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:49.670768 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:49.674914 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:49.805595 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:49.884811 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:50.169902 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:50.175249 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:50.308634 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:50.384592 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:50.670541 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:50.674811 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:50.805392 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:50.884978 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:51.049720 1117956 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:51.169605 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:51.174809 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:51.305922 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:51.387970 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:51.669810 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:51.675192 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:51.805935 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:51.884992 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:52.057578 1117956 pod_ready.go:92] pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:52.057652 1117956 pod_ready.go:81] duration metric: took 28.312070605s waiting for pod "metrics-server-7c66d45ddc-q8k8b" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:52.057677 1117956 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ssl96" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:52.174457 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:52.197142 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:52.305762 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:52.384615 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:52.670827 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:52.675047 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:52.806336 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:52.884454 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:53.171131 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:53.177405 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:53.306838 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:53.389687 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:53.671272 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:53.677627 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:53.807068 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:53.885525 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:54.105174 1117956 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ssl96" in "kube-system" namespace has status "Ready":"False"
	I1212 00:13:54.170972 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:54.189756 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:54.307686 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:54.385105 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:54.670420 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:54.679154 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:54.809069 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:54.890068 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:55.170352 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:55.175551 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:55.319343 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:55.383970 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:55.604819 1117956 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-ssl96" in "kube-system" namespace has status "Ready":"True"
	I1212 00:13:55.604847 1117956 pod_ready.go:81] duration metric: took 3.547149599s waiting for pod "nvidia-device-plugin-daemonset-ssl96" in "kube-system" namespace to be "Ready" ...
	I1212 00:13:55.604894 1117956 pod_ready.go:38] duration metric: took 34.232993104s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:13:55.604915 1117956 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:13:55.604942 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:13:55.605015 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:13:55.652022 1117956 cri.go:89] found id: "171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050"
	I1212 00:13:55.652089 1117956 cri.go:89] found id: ""
	I1212 00:13:55.652104 1117956 logs.go:284] 1 containers: [171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050]
	I1212 00:13:55.652167 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:13:55.656427 1117956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:13:55.656514 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:13:55.669914 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:55.675567 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:55.702060 1117956 cri.go:89] found id: "ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228"
	I1212 00:13:55.702088 1117956 cri.go:89] found id: ""
	I1212 00:13:55.702096 1117956 logs.go:284] 1 containers: [ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228]
	I1212 00:13:55.702156 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:13:55.706709 1117956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:13:55.706828 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:13:55.752511 1117956 cri.go:89] found id: "14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a"
	I1212 00:13:55.752534 1117956 cri.go:89] found id: ""
	I1212 00:13:55.752542 1117956 logs.go:284] 1 containers: [14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a]
	I1212 00:13:55.752601 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:13:55.757647 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:13:55.757766 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:13:55.801651 1117956 cri.go:89] found id: "7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06"
	I1212 00:13:55.801677 1117956 cri.go:89] found id: ""
	I1212 00:13:55.801686 1117956 logs.go:284] 1 containers: [7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06]
	I1212 00:13:55.801776 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:13:55.806000 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:55.807101 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:13:55.807197 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:13:55.862882 1117956 cri.go:89] found id: "ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b"
	I1212 00:13:55.862939 1117956 cri.go:89] found id: ""
	I1212 00:13:55.862959 1117956 logs.go:284] 1 containers: [ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b]
	I1212 00:13:55.863021 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:13:55.867414 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:13:55.867513 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:13:55.886837 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:55.911143 1117956 cri.go:89] found id: "dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e"
	I1212 00:13:55.911166 1117956 cri.go:89] found id: ""
	I1212 00:13:55.911174 1117956 logs.go:284] 1 containers: [dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e]
	I1212 00:13:55.911227 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:13:55.915609 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:13:55.915676 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:13:55.958223 1117956 cri.go:89] found id: "83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656"
	I1212 00:13:55.958245 1117956 cri.go:89] found id: ""
	I1212 00:13:55.958253 1117956 logs.go:284] 1 containers: [83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656]
	I1212 00:13:55.958335 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:13:55.962711 1117956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:13:55.962783 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 00:13:56.028590 1117956 logs.go:138] Found kubelet problem: Dec 12 00:12:53 addons-513852 kubelet[1352]: W1212 00:12:53.721647    1352 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:13:56.028863 1117956 logs.go:138] Found kubelet problem: Dec 12 00:12:53 addons-513852 kubelet[1352]: E1212 00:12:53.721689    1352 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:13:56.034262 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346591    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:13:56.034528 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346625    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:13:56.034750 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346781    1352 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	W1212 00:13:56.034987 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346807    1352 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	I1212 00:13:56.070382 1117956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:13:56.070463 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:13:56.093032 1117956 logs.go:123] Gathering logs for kube-apiserver [171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050] ...
	I1212 00:13:56.093111 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050"
	I1212 00:13:56.172050 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:56.185508 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:56.196299 1117956 logs.go:123] Gathering logs for kube-proxy [ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b] ...
	I1212 00:13:56.196337 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b"
	I1212 00:13:56.263365 1117956 logs.go:123] Gathering logs for container status ...
	I1212 00:13:56.263396 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:13:56.306106 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:56.324863 1117956 logs.go:123] Gathering logs for kindnet [83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656] ...
	I1212 00:13:56.324893 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656"
	I1212 00:13:56.374398 1117956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:13:56.374426 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:13:56.388787 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:56.499848 1117956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:13:56.499927 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 00:13:56.670531 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:56.676229 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:56.806148 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:56.876618 1117956 logs.go:123] Gathering logs for etcd [ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228] ...
	I1212 00:13:56.876690 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228"
	I1212 00:13:56.898651 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:57.107342 1117956 logs.go:123] Gathering logs for coredns [14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a] ...
	I1212 00:13:57.107423 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a"
	I1212 00:13:57.176502 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:57.177681 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:57.253199 1117956 logs.go:123] Gathering logs for kube-scheduler [7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06] ...
	I1212 00:13:57.253290 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06"
	I1212 00:13:57.308916 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:57.385729 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:57.453631 1117956 logs.go:123] Gathering logs for kube-controller-manager [dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e] ...
	I1212 00:13:57.453666 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e"
	I1212 00:13:57.559125 1117956 out.go:309] Setting ErrFile to fd 2...
	I1212 00:13:57.559158 1117956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1212 00:13:57.559237 1117956 out.go:239] X Problems detected in kubelet:
	W1212 00:13:57.559256 1117956 out.go:239]   Dec 12 00:12:53 addons-513852 kubelet[1352]: E1212 00:12:53.721689    1352 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:13:57.559268 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346591    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:13:57.559279 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346625    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:13:57.559417 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346781    1352 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	W1212 00:13:57.559435 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346807    1352 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	I1212 00:13:57.559442 1117956 out.go:309] Setting ErrFile to fd 2...
	I1212 00:13:57.559454 1117956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:13:57.669891 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:57.675480 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:57.806252 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:57.884611 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:58.187115 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:58.188489 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:58.306608 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:58.385502 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:58.674621 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:58.679220 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:58.806400 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:58.884417 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:59.170057 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:59.176387 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:59.306215 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:59.385746 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:59.671086 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:59.675174 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:13:59.807155 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:59.884351 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:00.171472 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:00.176396 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:14:00.306167 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:00.385148 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:00.670700 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:00.674877 1117956 kapi.go:107] duration metric: took 1m6.53096696s to wait for kubernetes.io/minikube-addons=registry ...
	I1212 00:14:00.805465 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:00.885042 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:01.171813 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:01.306146 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:01.385413 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:01.674729 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:01.805980 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:01.917900 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:02.173783 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:02.315441 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:02.392132 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:02.671132 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:02.805742 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:02.884950 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:03.178620 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:03.306617 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:03.393390 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:03.670877 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:03.806178 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:03.890095 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:04.171122 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:04.307188 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:04.386968 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:04.676214 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:04.806902 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:04.885091 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:05.170745 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:05.306575 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:05.385074 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:05.674801 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:05.805686 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:05.884496 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:06.170675 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:06.306469 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:06.385422 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:06.673400 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:06.807322 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:06.884610 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:07.170577 1117956 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:14:07.308026 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:07.384284 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:07.561467 1117956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:14:07.578145 1117956 api_server.go:72] duration metric: took 1m19.115217049s to wait for apiserver process to appear ...
	I1212 00:14:07.578224 1117956 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:14:07.578271 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:14:07.578338 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:14:07.673684 1117956 kapi.go:107] duration metric: took 1m13.534254732s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 00:14:07.717756 1117956 cri.go:89] found id: "171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050"
	I1212 00:14:07.717780 1117956 cri.go:89] found id: ""
	I1212 00:14:07.717789 1117956 logs.go:284] 1 containers: [171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050]
	I1212 00:14:07.717846 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:07.734268 1117956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:14:07.734348 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:14:07.787418 1117956 cri.go:89] found id: "ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228"
	I1212 00:14:07.787442 1117956 cri.go:89] found id: ""
	I1212 00:14:07.787450 1117956 logs.go:284] 1 containers: [ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228]
	I1212 00:14:07.787506 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:07.792786 1117956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:14:07.792859 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:14:07.806366 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:07.852806 1117956 cri.go:89] found id: "14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a"
	I1212 00:14:07.852828 1117956 cri.go:89] found id: ""
	I1212 00:14:07.852835 1117956 logs.go:284] 1 containers: [14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a]
	I1212 00:14:07.852888 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:07.857909 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:14:07.857980 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:14:07.884680 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:07.925308 1117956 cri.go:89] found id: "7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06"
	I1212 00:14:07.925381 1117956 cri.go:89] found id: ""
	I1212 00:14:07.925402 1117956 logs.go:284] 1 containers: [7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06]
	I1212 00:14:07.925498 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:07.953431 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:14:07.953502 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:14:08.034957 1117956 cri.go:89] found id: "ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b"
	I1212 00:14:08.034977 1117956 cri.go:89] found id: ""
	I1212 00:14:08.034987 1117956 logs.go:284] 1 containers: [ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b]
	I1212 00:14:08.035039 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:08.046893 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:14:08.046964 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:14:08.092774 1117956 cri.go:89] found id: "dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e"
	I1212 00:14:08.092795 1117956 cri.go:89] found id: ""
	I1212 00:14:08.092802 1117956 logs.go:284] 1 containers: [dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e]
	I1212 00:14:08.092854 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:08.101620 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:14:08.101752 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:14:08.197927 1117956 cri.go:89] found id: "83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656"
	I1212 00:14:08.197995 1117956 cri.go:89] found id: ""
	I1212 00:14:08.198016 1117956 logs.go:284] 1 containers: [83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656]
	I1212 00:14:08.198101 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:08.203208 1117956 logs.go:123] Gathering logs for kube-apiserver [171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050] ...
	I1212 00:14:08.203278 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050"
	I1212 00:14:08.279301 1117956 logs.go:123] Gathering logs for etcd [ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228] ...
	I1212 00:14:08.283035 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228"
	I1212 00:14:08.305975 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:08.359864 1117956 logs.go:123] Gathering logs for coredns [14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a] ...
	I1212 00:14:08.359935 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a"
	I1212 00:14:08.395636 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:08.447738 1117956 logs.go:123] Gathering logs for kube-scheduler [7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06] ...
	I1212 00:14:08.447820 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06"
	I1212 00:14:08.516251 1117956 logs.go:123] Gathering logs for kube-proxy [ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b] ...
	I1212 00:14:08.516327 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b"
	I1212 00:14:08.611374 1117956 logs.go:123] Gathering logs for kube-controller-manager [dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e] ...
	I1212 00:14:08.611399 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e"
	I1212 00:14:08.721802 1117956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:14:08.721877 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:14:08.807824 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:14:08.811574 1117956 logs.go:138] Found kubelet problem: Dec 12 00:12:53 addons-513852 kubelet[1352]: W1212 00:12:53.721647    1352 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:14:08.811838 1117956 logs.go:138] Found kubelet problem: Dec 12 00:12:53 addons-513852 kubelet[1352]: E1212 00:12:53.721689    1352 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:14:08.817086 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346591    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:14:08.817871 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346625    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:14:08.818070 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346781    1352 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	W1212 00:14:08.818292 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346807    1352 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	I1212 00:14:08.861864 1117956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:14:08.861940 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:14:08.885082 1117956 logs.go:123] Gathering logs for container status ...
	I1212 00:14:08.885211 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:14:08.891460 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:08.990768 1117956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:14:08.990836 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:14:09.108590 1117956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:14:09.108674 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 00:14:09.309217 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:09.317651 1117956 logs.go:123] Gathering logs for kindnet [83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656] ...
	I1212 00:14:09.317682 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656"
	I1212 00:14:09.367835 1117956 out.go:309] Setting ErrFile to fd 2...
	I1212 00:14:09.367865 1117956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1212 00:14:09.367917 1117956 out.go:239] X Problems detected in kubelet:
	W1212 00:14:09.367926 1117956 out.go:239]   Dec 12 00:12:53 addons-513852 kubelet[1352]: E1212 00:12:53.721689    1352 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:14:09.367933 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346591    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:14:09.367942 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346625    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:14:09.367951 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346781    1352 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	W1212 00:14:09.367957 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346807    1352 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	I1212 00:14:09.367968 1117956 out.go:309] Setting ErrFile to fd 2...
	I1212 00:14:09.367974 1117956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:09.384628 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:09.807585 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:09.884457 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:10.305583 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:10.385259 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:10.806214 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:10.885039 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:11.305588 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:14:11.383942 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:11.805681 1117956 kapi.go:107] duration metric: took 1m13.527097181s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 00:14:11.807871 1117956 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-513852 cluster.
	I1212 00:14:11.810212 1117956 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 00:14:11.812100 1117956 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 00:14:11.884331 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:12.384302 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:12.884874 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:13.388004 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:13.885952 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:14.386060 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:14.884501 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:15.384544 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:15.884313 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:16.384890 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:16.884217 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:17.386089 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:17.887672 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:18.383393 1117956 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:14:18.884618 1117956 kapi.go:107] duration metric: took 1m24.523114093s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 00:14:18.886891 1117956 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, metrics-server, inspektor-gadget, storage-provisioner, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1212 00:14:18.888733 1117956 addons.go:502] enable addons completed in 1m30.815446312s: enabled=[ingress-dns cloud-spanner nvidia-device-plugin metrics-server inspektor-gadget storage-provisioner storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1212 00:14:19.368206 1117956 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 00:14:19.377869 1117956 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 00:14:19.379201 1117956 api_server.go:141] control plane version: v1.28.4
	I1212 00:14:19.379226 1117956 api_server.go:131] duration metric: took 11.800981103s to wait for apiserver health ...
	I1212 00:14:19.379234 1117956 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:14:19.379256 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:14:19.379346 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:14:19.424112 1117956 cri.go:89] found id: "171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050"
	I1212 00:14:19.424137 1117956 cri.go:89] found id: ""
	I1212 00:14:19.424146 1117956 logs.go:284] 1 containers: [171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050]
	I1212 00:14:19.424209 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:19.429070 1117956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:14:19.429176 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:14:19.473920 1117956 cri.go:89] found id: "ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228"
	I1212 00:14:19.473947 1117956 cri.go:89] found id: ""
	I1212 00:14:19.473956 1117956 logs.go:284] 1 containers: [ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228]
	I1212 00:14:19.474011 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:19.478305 1117956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:14:19.478375 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:14:19.525518 1117956 cri.go:89] found id: "14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a"
	I1212 00:14:19.525540 1117956 cri.go:89] found id: ""
	I1212 00:14:19.525548 1117956 logs.go:284] 1 containers: [14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a]
	I1212 00:14:19.525603 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:19.529973 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:14:19.530053 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:14:19.571845 1117956 cri.go:89] found id: "7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06"
	I1212 00:14:19.571864 1117956 cri.go:89] found id: ""
	I1212 00:14:19.571872 1117956 logs.go:284] 1 containers: [7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06]
	I1212 00:14:19.571936 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:19.576539 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:14:19.576647 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:14:19.618163 1117956 cri.go:89] found id: "ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b"
	I1212 00:14:19.618247 1117956 cri.go:89] found id: ""
	I1212 00:14:19.618262 1117956 logs.go:284] 1 containers: [ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b]
	I1212 00:14:19.618324 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:19.622701 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:14:19.622786 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:14:19.664658 1117956 cri.go:89] found id: "dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e"
	I1212 00:14:19.664719 1117956 cri.go:89] found id: ""
	I1212 00:14:19.664741 1117956 logs.go:284] 1 containers: [dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e]
	I1212 00:14:19.664822 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:19.669507 1117956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:14:19.669623 1117956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:14:19.720316 1117956 cri.go:89] found id: "83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656"
	I1212 00:14:19.720337 1117956 cri.go:89] found id: ""
	I1212 00:14:19.720345 1117956 logs.go:284] 1 containers: [83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656]
	I1212 00:14:19.720401 1117956 ssh_runner.go:195] Run: which crictl
	I1212 00:14:19.724979 1117956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:14:19.725041 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:14:19.814489 1117956 logs.go:123] Gathering logs for kube-apiserver [171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050] ...
	I1212 00:14:19.814526 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050"
	I1212 00:14:19.875642 1117956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:14:19.875673 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:14:19.897542 1117956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:14:19.897572 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 00:14:20.071934 1117956 logs.go:123] Gathering logs for etcd [ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228] ...
	I1212 00:14:20.071968 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228"
	I1212 00:14:20.138833 1117956 logs.go:123] Gathering logs for coredns [14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a] ...
	I1212 00:14:20.138866 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a"
	I1212 00:14:20.193487 1117956 logs.go:123] Gathering logs for kube-scheduler [7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06] ...
	I1212 00:14:20.193520 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06"
	I1212 00:14:20.252956 1117956 logs.go:123] Gathering logs for kube-proxy [ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b] ...
	I1212 00:14:20.252988 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b"
	I1212 00:14:20.300265 1117956 logs.go:123] Gathering logs for kube-controller-manager [dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e] ...
	I1212 00:14:20.300294 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e"
	I1212 00:14:20.379531 1117956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:14:20.379564 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 00:14:20.447800 1117956 logs.go:138] Found kubelet problem: Dec 12 00:12:53 addons-513852 kubelet[1352]: W1212 00:12:53.721647    1352 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.448042 1117956 logs.go:138] Found kubelet problem: Dec 12 00:12:53 addons-513852 kubelet[1352]: E1212 00:12:53.721689    1352 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.453317 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346591    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.453517 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346625    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.453684 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346781    1352 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.453870 1117956 logs.go:138] Found kubelet problem: Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346807    1352 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	I1212 00:14:20.490481 1117956 logs.go:123] Gathering logs for container status ...
	I1212 00:14:20.490506 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:14:20.556271 1117956 logs.go:123] Gathering logs for kindnet [83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656] ...
	I1212 00:14:20.556299 1117956 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656"
	I1212 00:14:20.604692 1117956 out.go:309] Setting ErrFile to fd 2...
	I1212 00:14:20.604716 1117956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1212 00:14:20.604762 1117956 out.go:239] X Problems detected in kubelet:
	W1212 00:14:20.604770 1117956 out.go:239]   Dec 12 00:12:53 addons-513852 kubelet[1352]: E1212 00:12:53.721689    1352 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.604777 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346591    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.604806 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346625    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513852" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.604822 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: W1212 00:13:21.346781    1352 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	W1212 00:14:20.604828 1117956 out.go:239]   Dec 12 00:13:21 addons-513852 kubelet[1352]: E1212 00:13:21.346807    1352 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-513852" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513852' and this object
	I1212 00:14:20.604840 1117956 out.go:309] Setting ErrFile to fd 2...
	I1212 00:14:20.604847 1117956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:30.616480 1117956 system_pods.go:59] 18 kube-system pods found
	I1212 00:14:30.616518 1117956 system_pods.go:61] "coredns-5dd5756b68-gvfh4" [b1b349a6-9a5a-4c6f-91c8-c6e3b567eea0] Running
	I1212 00:14:30.616526 1117956 system_pods.go:61] "csi-hostpath-attacher-0" [a06b11fe-ad4b-470f-82d5-384e33be061a] Running
	I1212 00:14:30.616531 1117956 system_pods.go:61] "csi-hostpath-resizer-0" [ba8b9710-a94d-4d03-9bd8-aac9f2bd8984] Running
	I1212 00:14:30.616536 1117956 system_pods.go:61] "csi-hostpathplugin-8kkcd" [65e82f73-1b35-4089-9756-16699e21e0ef] Running
	I1212 00:14:30.616542 1117956 system_pods.go:61] "etcd-addons-513852" [8e75e307-2602-402c-9981-232e674486e0] Running
	I1212 00:14:30.616547 1117956 system_pods.go:61] "kindnet-d7b6k" [2c045b49-fdb0-4d3b-8508-98e082fb738a] Running
	I1212 00:14:30.616554 1117956 system_pods.go:61] "kube-apiserver-addons-513852" [9d5b6840-fab9-4266-9c77-70d15c2c9407] Running
	I1212 00:14:30.616561 1117956 system_pods.go:61] "kube-controller-manager-addons-513852" [35ba684c-0799-4a2c-80ba-591864509f6b] Running
	I1212 00:14:30.616570 1117956 system_pods.go:61] "kube-ingress-dns-minikube" [29a08ebe-149e-48f7-96e3-41c96a718619] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 00:14:30.616579 1117956 system_pods.go:61] "kube-proxy-8kkgn" [72599864-0658-4e1a-9ce3-a884c258f4a5] Running
	I1212 00:14:30.616585 1117956 system_pods.go:61] "kube-scheduler-addons-513852" [971cdcff-7a39-46e4-a7a9-7f3e10969322] Running
	I1212 00:14:30.616591 1117956 system_pods.go:61] "metrics-server-7c66d45ddc-q8k8b" [ea3981e3-770c-404a-aa8d-66a2d769677f] Running
	I1212 00:14:30.616596 1117956 system_pods.go:61] "nvidia-device-plugin-daemonset-ssl96" [97efc1d3-32a2-484f-90ee-d7d726a4211f] Running
	I1212 00:14:30.616603 1117956 system_pods.go:61] "registry-nztsx" [d6d72673-3fd0-4b6a-8d6c-7ebec393d5cf] Running
	I1212 00:14:30.616608 1117956 system_pods.go:61] "registry-proxy-v7h4s" [a63d003e-1e86-4e98-8cec-b7ede232f639] Running
	I1212 00:14:30.616616 1117956 system_pods.go:61] "snapshot-controller-58dbcc7b99-mclbz" [6da16743-7e5d-4934-8b84-d5af75a53800] Running
	I1212 00:14:30.616621 1117956 system_pods.go:61] "snapshot-controller-58dbcc7b99-q5h4c" [796392dc-a006-4217-88f5-0525e11bf20f] Running
	I1212 00:14:30.616626 1117956 system_pods.go:61] "storage-provisioner" [8223859f-1e90-4ec7-b191-0522163b4b21] Running
	I1212 00:14:30.616633 1117956 system_pods.go:74] duration metric: took 11.237392221s to wait for pod list to return data ...
	I1212 00:14:30.616642 1117956 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:14:30.619191 1117956 default_sa.go:45] found service account: "default"
	I1212 00:14:30.619217 1117956 default_sa.go:55] duration metric: took 2.565659ms for default service account to be created ...
	I1212 00:14:30.619226 1117956 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:14:30.628459 1117956 system_pods.go:86] 18 kube-system pods found
	I1212 00:14:30.628491 1117956 system_pods.go:89] "coredns-5dd5756b68-gvfh4" [b1b349a6-9a5a-4c6f-91c8-c6e3b567eea0] Running
	I1212 00:14:30.628502 1117956 system_pods.go:89] "csi-hostpath-attacher-0" [a06b11fe-ad4b-470f-82d5-384e33be061a] Running
	I1212 00:14:30.628507 1117956 system_pods.go:89] "csi-hostpath-resizer-0" [ba8b9710-a94d-4d03-9bd8-aac9f2bd8984] Running
	I1212 00:14:30.628512 1117956 system_pods.go:89] "csi-hostpathplugin-8kkcd" [65e82f73-1b35-4089-9756-16699e21e0ef] Running
	I1212 00:14:30.628517 1117956 system_pods.go:89] "etcd-addons-513852" [8e75e307-2602-402c-9981-232e674486e0] Running
	I1212 00:14:30.628522 1117956 system_pods.go:89] "kindnet-d7b6k" [2c045b49-fdb0-4d3b-8508-98e082fb738a] Running
	I1212 00:14:30.628527 1117956 system_pods.go:89] "kube-apiserver-addons-513852" [9d5b6840-fab9-4266-9c77-70d15c2c9407] Running
	I1212 00:14:30.628539 1117956 system_pods.go:89] "kube-controller-manager-addons-513852" [35ba684c-0799-4a2c-80ba-591864509f6b] Running
	I1212 00:14:30.628551 1117956 system_pods.go:89] "kube-ingress-dns-minikube" [29a08ebe-149e-48f7-96e3-41c96a718619] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 00:14:30.628562 1117956 system_pods.go:89] "kube-proxy-8kkgn" [72599864-0658-4e1a-9ce3-a884c258f4a5] Running
	I1212 00:14:30.628568 1117956 system_pods.go:89] "kube-scheduler-addons-513852" [971cdcff-7a39-46e4-a7a9-7f3e10969322] Running
	I1212 00:14:30.628573 1117956 system_pods.go:89] "metrics-server-7c66d45ddc-q8k8b" [ea3981e3-770c-404a-aa8d-66a2d769677f] Running
	I1212 00:14:30.628580 1117956 system_pods.go:89] "nvidia-device-plugin-daemonset-ssl96" [97efc1d3-32a2-484f-90ee-d7d726a4211f] Running
	I1212 00:14:30.628585 1117956 system_pods.go:89] "registry-nztsx" [d6d72673-3fd0-4b6a-8d6c-7ebec393d5cf] Running
	I1212 00:14:30.628592 1117956 system_pods.go:89] "registry-proxy-v7h4s" [a63d003e-1e86-4e98-8cec-b7ede232f639] Running
	I1212 00:14:30.628597 1117956 system_pods.go:89] "snapshot-controller-58dbcc7b99-mclbz" [6da16743-7e5d-4934-8b84-d5af75a53800] Running
	I1212 00:14:30.628602 1117956 system_pods.go:89] "snapshot-controller-58dbcc7b99-q5h4c" [796392dc-a006-4217-88f5-0525e11bf20f] Running
	I1212 00:14:30.628609 1117956 system_pods.go:89] "storage-provisioner" [8223859f-1e90-4ec7-b191-0522163b4b21] Running
	I1212 00:14:30.628616 1117956 system_pods.go:126] duration metric: took 9.384837ms to wait for k8s-apps to be running ...
	I1212 00:14:30.628623 1117956 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:14:30.628681 1117956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:14:30.642263 1117956 system_svc.go:56] duration metric: took 13.630741ms WaitForService to wait for kubelet.
	I1212 00:14:30.642291 1117956 kubeadm.go:581] duration metric: took 1m42.179368052s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 00:14:30.642310 1117956 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:14:30.645504 1117956 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 00:14:30.645537 1117956 node_conditions.go:123] node cpu capacity is 2
	I1212 00:14:30.645548 1117956 node_conditions.go:105] duration metric: took 3.232905ms to run NodePressure ...
	I1212 00:14:30.645560 1117956 start.go:228] waiting for startup goroutines ...
	I1212 00:14:30.645566 1117956 start.go:233] waiting for cluster config update ...
	I1212 00:14:30.645580 1117956 start.go:242] writing updated cluster config ...
	I1212 00:14:30.645863 1117956 ssh_runner.go:195] Run: rm -f paused
	I1212 00:14:30.983564 1117956 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 00:14:30.985843 1117956 out.go:177] * Done! kubectl is now configured to use "addons-513852" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Dec 12 00:23:06 addons-513852 crio[888]: time="2023-12-12 00:23:06.663047183Z" level=info msg="Starting container: 1462e89d429330fbce7b32b87d20cbd037c2a27630b9895091374339a85e3408" id=163f7ac4-6773-43b2-a62b-30b90d3bbdad name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:23:06 addons-513852 conmon[7231]: conmon 1462e89d429330fbce7b <ninfo>: container 7242 exited with status 1
	Dec 12 00:23:06 addons-513852 crio[888]: time="2023-12-12 00:23:06.675169204Z" level=info msg="Started container" PID=7242 containerID=1462e89d429330fbce7b32b87d20cbd037c2a27630b9895091374339a85e3408 description=default/hello-world-app-5d77478584-zvbn2/hello-world-app id=163f7ac4-6773-43b2-a62b-30b90d3bbdad name=/runtime.v1.RuntimeService/StartContainer sandboxID=414f7d343efbac88404391c3372686bccc0e3ddee5ee7ee10782d896d552a4b1
	Dec 12 00:23:07 addons-513852 crio[888]: time="2023-12-12 00:23:07.072194352Z" level=info msg="Removing container: 3027d977d09c0ff2e9c8cf8eec0d25a55abb825e27966d181cb021042cb1a5f1" id=fc1cd726-b0c1-42f5-ab03-25b0185d7ce1 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:23:07 addons-513852 crio[888]: time="2023-12-12 00:23:07.099094569Z" level=info msg="Removed container 3027d977d09c0ff2e9c8cf8eec0d25a55abb825e27966d181cb021042cb1a5f1: default/hello-world-app-5d77478584-zvbn2/hello-world-app" id=fc1cd726-b0c1-42f5-ab03-25b0185d7ce1 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:23:12 addons-513852 crio[888]: time="2023-12-12 00:23:12.583351032Z" level=info msg="Checking image status: docker.io/nginx:latest" id=e49b873b-68db-459d-ad11-0481ca19396f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:23:12 addons-513852 crio[888]: time="2023-12-12 00:23:12.583584921Z" level=info msg="Image docker.io/nginx:latest not found" id=e49b873b-68db-459d-ad11-0481ca19396f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:23:14 addons-513852 crio[888]: time="2023-12-12 00:23:14.583396565Z" level=info msg="Checking image status: busybox:stable" id=a14abcce-21cc-4bb9-b71b-6da67f071269 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:23:14 addons-513852 crio[888]: time="2023-12-12 00:23:14.583565290Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Dec 12 00:23:14 addons-513852 crio[888]: time="2023-12-12 00:23:14.583673586Z" level=info msg="Image busybox:stable not found" id=a14abcce-21cc-4bb9-b71b-6da67f071269 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:23:26 addons-513852 crio[888]: time="2023-12-12 00:23:26.583763621Z" level=info msg="Checking image status: busybox:stable" id=15b4e3f5-d0a0-4c5a-9406-fa385f6977a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:23:26 addons-513852 crio[888]: time="2023-12-12 00:23:26.583953023Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Dec 12 00:23:26 addons-513852 crio[888]: time="2023-12-12 00:23:26.584072125Z" level=info msg="Image busybox:stable not found" id=15b4e3f5-d0a0-4c5a-9406-fa385f6977a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:23:26 addons-513852 crio[888]: time="2023-12-12 00:23:26.583798853Z" level=info msg="Checking image status: docker.io/nginx:latest" id=275c3558-d3eb-40b7-87b4-18ce3c7b8ea3 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:23:26 addons-513852 crio[888]: time="2023-12-12 00:23:26.584254232Z" level=info msg="Image docker.io/nginx:latest not found" id=275c3558-d3eb-40b7-87b4-18ce3c7b8ea3 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:23:39 addons-513852 crio[888]: time="2023-12-12 00:23:39.583253915Z" level=info msg="Checking image status: busybox:stable" id=3d96dda0-aa5f-438b-bf04-0f9fdf27af52 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:23:39 addons-513852 crio[888]: time="2023-12-12 00:23:39.583421270Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Dec 12 00:23:39 addons-513852 crio[888]: time="2023-12-12 00:23:39.583432502Z" level=info msg="Checking image status: docker.io/nginx:latest" id=11493e28-c4a8-422a-864e-36f4ab39974e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:23:39 addons-513852 crio[888]: time="2023-12-12 00:23:39.583539346Z" level=info msg="Image busybox:stable not found" id=3d96dda0-aa5f-438b-bf04-0f9fdf27af52 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:23:39 addons-513852 crio[888]: time="2023-12-12 00:23:39.583703264Z" level=info msg="Image docker.io/nginx:latest not found" id=11493e28-c4a8-422a-864e-36f4ab39974e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:23:39 addons-513852 crio[888]: time="2023-12-12 00:23:39.584994942Z" level=info msg="Pulling image: docker.io/nginx:latest" id=badb0b13-b212-4431-8b67-7ed155a3ba6e name=/runtime.v1.ImageService/PullImage
	Dec 12 00:23:39 addons-513852 crio[888]: time="2023-12-12 00:23:39.586981370Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Dec 12 00:23:52 addons-513852 crio[888]: time="2023-12-12 00:23:52.582771651Z" level=info msg="Checking image status: busybox:stable" id=e06397d7-f5f1-48ed-8a8a-35f6d84172f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:23:52 addons-513852 crio[888]: time="2023-12-12 00:23:52.582944478Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Dec 12 00:23:52 addons-513852 crio[888]: time="2023-12-12 00:23:52.583057090Z" level=info msg="Image busybox:stable not found" id=e06397d7-f5f1-48ed-8a8a-35f6d84172f1 name=/runtime.v1.ImageService/ImageStatus
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	1462e89d42933       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                                             47 seconds ago      Exited              hello-world-app                          6                   414f7d343efba       hello-world-app-5d77478584-zvbn2
	bf785af0bcc4f       ghcr.io/headlamp-k8s/headlamp@sha256:7a9587036bd29304f8f1387a7245556a3c479434670b2ca58e3624d44d2a68c9                                        6 minutes ago       Running             headlamp                                 0                   df88438eca5ad       headlamp-777fd4b855-zdxdm
	1de417d0f2b01       docker.io/library/nginx@sha256:18d2bb20c22e511b92a3ec81f553edfcaeeb74fd1c96a92c56a6c4252c75eec7                                              9 minutes ago       Running             nginx                                    0                   56707387cbc95       nginx
	ff80a687e8468       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          9 minutes ago       Running             csi-snapshotter                          0                   28f82b502238f       csi-hostpathplugin-8kkcd
	aa48e5681627c       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          9 minutes ago       Running             csi-provisioner                          0                   28f82b502238f       csi-hostpathplugin-8kkcd
	d3c69ecfa2cb9       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            9 minutes ago       Running             liveness-probe                           0                   28f82b502238f       csi-hostpathplugin-8kkcd
	8c028d54e882b       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           9 minutes ago       Running             hostpath                                 0                   28f82b502238f       csi-hostpathplugin-8kkcd
	4ed81cedf87de       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                                 9 minutes ago       Running             gcp-auth                                 0                   f9d05e6cc9d84       gcp-auth-d4c87556c-mkrlc
	5dd45b8940a9d       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                9 minutes ago       Running             node-driver-registrar                    0                   28f82b502238f       csi-hostpathplugin-8kkcd
	0682e9a566810       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   28f82b502238f       csi-hostpathplugin-8kkcd
	0310b20961224       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             10 minutes ago      Running             local-path-provisioner                   0                   315fbfb1377fb       local-path-provisioner-78b46b4d5c-t9rmh
	4123946222b6b       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             10 minutes ago      Running             csi-attacher                             0                   f79a314459d2d       csi-hostpath-attacher-0
	7f8eb329a0602       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   5721dff2406d0       csi-hostpath-resizer-0
	771cb109d1f79       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      10 minutes ago      Running             volume-snapshot-controller               0                   722863bbe27fb       snapshot-controller-58dbcc7b99-q5h4c
	251573cb85f04       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      10 minutes ago      Running             volume-snapshot-controller               0                   da545940749e5       snapshot-controller-58dbcc7b99-mclbz
	14c1b0ffb4b48       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                                             10 minutes ago      Running             coredns                                  0                   f5c72f1476e94       coredns-5dd5756b68-gvfh4
	582f981971581       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             10 minutes ago      Running             storage-provisioner                      0                   66e1975f4362d       storage-provisioner
	ec5053691c9ec       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                                                             11 minutes ago      Running             kube-proxy                               0                   059340538b79a       kube-proxy-8kkgn
	83d3a48bf3ebf       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                                             11 minutes ago      Running             kindnet-cni                              0                   b2d7d7c1d611c       kindnet-d7b6k
	dbef07d640e56       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                                                             11 minutes ago      Running             kube-controller-manager                  0                   109c1bb7bad9b       kube-controller-manager-addons-513852
	ae1f1c30ee64c       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                                             11 minutes ago      Running             etcd                                     0                   801f18c698050       etcd-addons-513852
	171aa4fbbc251       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                                                             11 minutes ago      Running             kube-apiserver                           0                   186724224b3de       kube-apiserver-addons-513852
	7074dc36c6f1d       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                                                             11 minutes ago      Running             kube-scheduler                           0                   cd1d45acb0857       kube-scheduler-addons-513852
	
	* 
	* ==> coredns [14c1b0ffb4b48277c7dc12c99c2f86e3ff1d4d0d4a079632b6c2e46a0440743a] <==
	* [INFO] 10.244.0.18:32897 - 9231 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055424s
	[INFO] 10.244.0.18:32897 - 34563 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000063186s
	[INFO] 10.244.0.18:32897 - 55321 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055957s
	[INFO] 10.244.0.18:32897 - 6449 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056499s
	[INFO] 10.244.0.18:32897 - 47565 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001173693s
	[INFO] 10.244.0.18:32897 - 8782 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002206737s
	[INFO] 10.244.0.18:32897 - 55467 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000062415s
	[INFO] 10.244.0.18:43938 - 59019 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000149961s
	[INFO] 10.244.0.18:53763 - 62261 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043904s
	[INFO] 10.244.0.18:43938 - 49897 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000061094s
	[INFO] 10.244.0.18:53763 - 17741 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000109797s
	[INFO] 10.244.0.18:43938 - 10651 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000463s
	[INFO] 10.244.0.18:43938 - 55703 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000216092s
	[INFO] 10.244.0.18:53763 - 59900 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000199641s
	[INFO] 10.244.0.18:53763 - 8372 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000067263s
	[INFO] 10.244.0.18:43938 - 20165 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000177423s
	[INFO] 10.244.0.18:43938 - 10144 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063736s
	[INFO] 10.244.0.18:53763 - 26676 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000221146s
	[INFO] 10.244.0.18:53763 - 57851 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000415848s
	[INFO] 10.244.0.18:53763 - 18742 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000711741s
	[INFO] 10.244.0.18:43938 - 42275 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001552684s
	[INFO] 10.244.0.18:43938 - 28907 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002086576s
	[INFO] 10.244.0.18:43938 - 52962 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057s
	[INFO] 10.244.0.18:53763 - 41639 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002489108s
	[INFO] 10.244.0.18:53763 - 19464 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055769s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-513852
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-513852
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4
	                    minikube.k8s.io/name=addons-513852
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T00_12_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-513852
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-513852"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 00:12:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-513852
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 00:23:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 00:23:18 +0000   Tue, 12 Dec 2023 00:12:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 00:23:18 +0000   Tue, 12 Dec 2023 00:12:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 00:23:18 +0000   Tue, 12 Dec 2023 00:12:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 00:23:18 +0000   Tue, 12 Dec 2023 00:13:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-513852
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 57bcb8d38ea843449eabb057b789c54e
	  System UUID:                4f3fd475-4e07-4c43-9995-4e2e0466c129
	  Boot ID:                    1e71add7-2409-4eb4-97fc-c7110220f3c5
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-zvbn2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  default                     task-pv-pod                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  default                     test-local-path                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m54s
	  gcp-auth                    gcp-auth-d4c87556c-mkrlc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  headlamp                    headlamp-777fd4b855-zdxdm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 coredns-5dd5756b68-gvfh4                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-8kkcd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-addons-513852                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-d7b6k                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-513852               250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-513852      200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-8kkgn                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-513852               100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-58dbcc7b99-mclbz       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-58dbcc7b99-q5h4c       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-78b46b4d5c-t9rmh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-513852 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-513852 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node addons-513852 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node addons-513852 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node addons-513852 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node addons-513852 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                node-controller  Node addons-513852 event: Registered Node addons-513852 in Controller
	  Normal  NodeReady                10m                kubelet          Node addons-513852 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001096] FS-Cache: O-key=[8] '51613b0000000000'
	[  +0.000797] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001058] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=000000009ed47378
	[  +0.001097] FS-Cache: N-key=[8] '51613b0000000000'
	[  +0.004696] FS-Cache: Duplicate cookie detected
	[  +0.000742] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001026] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=000000006ac44817
	[  +0.001133] FS-Cache: O-key=[8] '51613b0000000000'
	[  +0.000752] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001015] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=00000000b962c00a
	[  +0.001103] FS-Cache: N-key=[8] '51613b0000000000'
	[  +3.096598] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000996] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=000000002fc1e9d2
	[  +0.001145] FS-Cache: O-key=[8] '50613b0000000000'
	[  +0.000744] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000970] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=000000009ed47378
	[  +0.001095] FS-Cache: N-key=[8] '50613b0000000000'
	[  +0.330575] FS-Cache: Duplicate cookie detected
	[  +0.000746] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=00000000caee5792
	[  +0.001154] FS-Cache: O-key=[8] '56613b0000000000'
	[  +0.000744] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000977] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=0000000001854e73
	[  +0.001084] FS-Cache: N-key=[8] '56613b0000000000'
	
	* 
	* ==> etcd [ae1f1c30ee64cea47ead22958e6a02cb88b974d4cd6d0f7c5cfea8a560f6d228] <==
	* {"level":"info","ts":"2023-12-12T00:12:49.040559Z","caller":"traceutil/trace.go:171","msg":"trace[962685417] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"171.876794ms","start":"2023-12-12T00:12:48.868563Z","end":"2023-12-12T00:12:49.04044Z","steps":["trace[962685417] 'process raft request'  (duration: 168.850734ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T00:12:49.043632Z","caller":"traceutil/trace.go:171","msg":"trace[1158355067] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"160.215882ms","start":"2023-12-12T00:12:48.883402Z","end":"2023-12-12T00:12:49.043618Z","steps":["trace[1158355067] 'process raft request'  (duration: 156.879202ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T00:12:49.754622Z","caller":"traceutil/trace.go:171","msg":"trace[307929165] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"128.71364ms","start":"2023-12-12T00:12:49.625893Z","end":"2023-12-12T00:12:49.754606Z","steps":["trace[307929165] 'process raft request'  (duration: 128.594407ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T00:12:51.068403Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"311.478003ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T00:12:51.091794Z","caller":"traceutil/trace.go:171","msg":"trace[1239514687] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:406; }","duration":"334.878339ms","start":"2023-12-12T00:12:50.756893Z","end":"2023-12-12T00:12:51.091771Z","steps":["trace[1239514687] 'agreement among raft nodes before linearized reading'  (duration: 30.499317ms)","trace[1239514687] 'range keys from in-memory index tree'  (duration: 280.954784ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T00:12:51.091848Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T00:12:50.75688Z","time spent":"334.946785ms","remote":"127.0.0.1:53860","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-12-12T00:12:51.186039Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"292.850258ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128025751547152546 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/certificate-controller\" mod_revision:239 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/certificate-controller\" value_size:139 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/certificate-controller\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-12T00:12:51.200802Z","caller":"traceutil/trace.go:171","msg":"trace[847282624] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"443.791298ms","start":"2023-12-12T00:12:50.756974Z","end":"2023-12-12T00:12:51.200765Z","steps":["trace[847282624] 'process raft request'  (duration: 30.75357ms)","trace[847282624] 'compare'  (duration: 212.020476ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T00:12:51.200946Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T00:12:50.756964Z","time spent":"443.909703ms","remote":"127.0.0.1:54036","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":207,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/certificate-controller\" mod_revision:239 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/certificate-controller\" value_size:139 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/certificate-controller\" > >"}
	{"level":"info","ts":"2023-12-12T00:12:51.226382Z","caller":"traceutil/trace.go:171","msg":"trace[396054803] linearizableReadLoop","detail":"{readStateIndex:418; appliedIndex:417; }","duration":"227.250669ms","start":"2023-12-12T00:12:50.999109Z","end":"2023-12-12T00:12:51.22636Z","steps":["trace[396054803] 'read index received'  (duration: 299.437µs)","trace[396054803] 'applied index is now lower than readState.Index'  (duration: 226.947416ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T00:12:51.302025Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"302.922647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-513852\" ","response":"range_response_count:1 size:5743"}
	{"level":"info","ts":"2023-12-12T00:12:51.314521Z","caller":"traceutil/trace.go:171","msg":"trace[409684636] range","detail":"{range_begin:/registry/minions/addons-513852; range_end:; response_count:1; response_revision:407; }","duration":"315.41615ms","start":"2023-12-12T00:12:50.999083Z","end":"2023-12-12T00:12:51.314499Z","steps":["trace[409684636] 'agreement among raft nodes before linearized reading'  (duration: 227.998995ms)","trace[409684636] 'range keys from in-memory index tree'  (duration: 74.881183ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T00:12:51.306287Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.09872ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128025751547152548 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kindnet-d7b6k.179fed2efbb7a0b1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet-d7b6k.179fed2efbb7a0b1\" value_size:630 lease:8128025751547151790 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-12-12T00:12:51.314946Z","caller":"traceutil/trace.go:171","msg":"trace[190259492] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"223.479843ms","start":"2023-12-12T00:12:51.091458Z","end":"2023-12-12T00:12:51.314938Z","steps":["trace[190259492] 'process raft request'  (duration: 223.399247ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T00:12:51.319673Z","caller":"traceutil/trace.go:171","msg":"trace[653067686] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"251.562609ms","start":"2023-12-12T00:12:51.068094Z","end":"2023-12-12T00:12:51.319657Z","steps":["trace[653067686] 'process raft request'  (duration: 128.049085ms)","trace[653067686] 'compare'  (duration: 105.796111ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T00:12:51.314792Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T00:12:50.999045Z","time spent":"315.720281ms","remote":"127.0.0.1:54008","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":5767,"request content":"key:\"/registry/minions/addons-513852\" "}
	{"level":"warn","ts":"2023-12-12T00:12:51.892819Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.136689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-12-12T00:12:51.905677Z","caller":"traceutil/trace.go:171","msg":"trace[552383047] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:413; }","duration":"159.999937ms","start":"2023-12-12T00:12:51.745663Z","end":"2023-12-12T00:12:51.905663Z","steps":["trace[552383047] 'agreement among raft nodes before linearized reading'  (duration: 60.646448ms)","trace[552383047] 'range keys from in-memory index tree'  (duration: 86.456248ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T00:12:51.905439Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.80366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T00:12:51.906022Z","caller":"traceutil/trace.go:171","msg":"trace[622420802] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:413; }","duration":"160.401952ms","start":"2023-12-12T00:12:51.745611Z","end":"2023-12-12T00:12:51.906013Z","steps":["trace[622420802] 'agreement among raft nodes before linearized reading'  (duration: 60.717134ms)","trace[622420802] 'range keys from in-memory index tree'  (duration: 99.074358ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T00:12:51.905462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.599555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-12-12T00:12:51.906102Z","caller":"traceutil/trace.go:171","msg":"trace[844617221] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:413; }","duration":"160.244353ms","start":"2023-12-12T00:12:51.74585Z","end":"2023-12-12T00:12:51.906095Z","steps":["trace[844617221] 'agreement among raft nodes before linearized reading'  (duration: 60.444543ms)","trace[844617221] 'range keys from in-memory index tree'  (duration: 99.126894ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T00:22:29.9633Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1775}
	{"level":"info","ts":"2023-12-12T00:22:29.99196Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1775,"took":"28.009484ms","hash":2105530962}
	{"level":"info","ts":"2023-12-12T00:22:29.992093Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2105530962,"revision":1775,"compact-revision":-1}
	
	* 
	* ==> gcp-auth [4ed81cedf87dec99686adf2b83b9050047b670a0deeda2400f065d9d5dd5519a] <==
	* 2023/12/12 00:14:11 GCP Auth Webhook started!
	2023/12/12 00:14:41 Ready to marshal response ...
	2023/12/12 00:14:41 Ready to write response ...
	2023/12/12 00:14:46 Ready to marshal response ...
	2023/12/12 00:14:46 Ready to write response ...
	2023/12/12 00:15:00 Ready to marshal response ...
	2023/12/12 00:15:00 Ready to write response ...
	2023/12/12 00:15:00 Ready to marshal response ...
	2023/12/12 00:15:00 Ready to write response ...
	2023/12/12 00:17:05 Ready to marshal response ...
	2023/12/12 00:17:05 Ready to write response ...
	2023/12/12 00:17:33 Ready to marshal response ...
	2023/12/12 00:17:33 Ready to write response ...
	2023/12/12 00:17:33 Ready to marshal response ...
	2023/12/12 00:17:33 Ready to write response ...
	2023/12/12 00:17:33 Ready to marshal response ...
	2023/12/12 00:17:33 Ready to write response ...
	2023/12/12 00:17:52 Ready to marshal response ...
	2023/12/12 00:17:52 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  00:23:54 up  7:06,  0 users,  load average: 0.17, 0.32, 0.43
	Linux addons-513852 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [83d3a48bf3ebfa60132f1e7256f863596034352918ae1521b6d63c05eb55f656] <==
	* I1212 00:21:51.346941       1 main.go:227] handling current node
	I1212 00:22:01.357641       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:22:01.357667       1 main.go:227] handling current node
	I1212 00:22:11.369960       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:22:11.369984       1 main.go:227] handling current node
	I1212 00:22:21.382578       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:22:21.382716       1 main.go:227] handling current node
	I1212 00:22:31.394755       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:22:31.394780       1 main.go:227] handling current node
	I1212 00:22:41.401312       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:22:41.401338       1 main.go:227] handling current node
	I1212 00:22:51.413604       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:22:51.413734       1 main.go:227] handling current node
	I1212 00:23:01.426851       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:23:01.426878       1 main.go:227] handling current node
	I1212 00:23:11.434671       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:23:11.434702       1 main.go:227] handling current node
	I1212 00:23:21.438888       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:23:21.438916       1 main.go:227] handling current node
	I1212 00:23:31.449533       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:23:31.449560       1 main.go:227] handling current node
	I1212 00:23:41.462398       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:23:41.462430       1 main.go:227] handling current node
	I1212 00:23:51.467103       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:23:51.467133       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [171aa4fbbc251dce3707bf2c16327dcc857d6bd10c5d919000bfdc4dff92e050] <==
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 00:13:52.038096       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1212 00:13:52.038947       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.192.0:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.192.0:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.192.0:443: connect: connection refused
	E1212 00:13:52.039765       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.192.0:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.192.0:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.192.0:443: connect: connection refused
	E1212 00:13:52.046484       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.192.0:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.192.0:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.192.0:443: connect: connection refused
	I1212 00:13:52.229239       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 00:14:32.217910       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 00:14:46.180945       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1212 00:14:46.519267       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.246.84"}
	I1212 00:14:48.402186       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1212 00:14:48.417603       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1212 00:14:49.434212       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1212 00:14:53.073675       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1212 00:17:05.576396       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.73.0"}
	I1212 00:17:32.339370       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:17:32.339438       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:17:32.340319       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:17:32.340368       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:17:32.340633       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:17:32.340677       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:17:33.788468       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.102.102"}
	I1212 00:22:32.340243       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:22:32.340314       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 00:22:32.340926       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 00:22:32.341022       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [dbef07d640e56f637bf2c00be2553ebcd338b8974898aaa2f8e9e768207a4f8e] <==
	* I1212 00:17:52.193283       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1212 00:17:56.421693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="45.947µs"
	W1212 00:18:01.770115       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:18:01.770147       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 00:18:09.596363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="59.945µs"
	W1212 00:18:32.503394       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:18:32.503427       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 00:18:47.544254       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="162.629µs"
	I1212 00:19:01.597184       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="85.109µs"
	W1212 00:19:29.992350       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:19:29.992388       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 00:20:14.251285       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:20:14.251318       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 00:20:19.744850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="88.892µs"
	I1212 00:20:33.595005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="71.637µs"
	W1212 00:20:52.744695       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:20:52.744727       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 00:21:48.382875       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:21:48.382908       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 00:22:37.851532       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:22:37.851571       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 00:23:07.095181       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="85.29µs"
	I1212 00:23:20.595139       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="76.281µs"
	W1212 00:23:23.897616       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:23:23.897753       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [ec5053691c9ec92dbe87d4d1a2a25332a96646ca4628362fc3a6f4ce2f7c3f0b] <==
	* I1212 00:12:53.482323       1 server_others.go:69] "Using iptables proxy"
	I1212 00:12:53.564308       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1212 00:12:53.629018       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:12:53.639669       1 server_others.go:152] "Using iptables Proxier"
	I1212 00:12:53.639775       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 00:12:53.639806       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 00:12:53.639908       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 00:12:53.640208       1 server.go:846] "Version info" version="v1.28.4"
	I1212 00:12:53.640392       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:12:53.641320       1 config.go:188] "Starting service config controller"
	I1212 00:12:53.641411       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 00:12:53.641462       1 config.go:97] "Starting endpoint slice config controller"
	I1212 00:12:53.641493       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 00:12:53.642035       1 config.go:315] "Starting node config controller"
	I1212 00:12:53.642226       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:12:53.742606       1 shared_informer.go:318] Caches are synced for node config
	I1212 00:12:53.751432       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:12:53.757640       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [7074dc36c6f1d2a5758f2630a1349a894b80dc801069cf4425f0df9c0e015b06] <==
	* W1212 00:12:32.615189       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 00:12:32.615204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 00:12:32.615258       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 00:12:32.615272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 00:12:32.615323       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 00:12:32.615337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 00:12:32.615391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 00:12:32.615405       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 00:12:32.615462       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 00:12:32.615478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 00:12:32.615534       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 00:12:32.615549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 00:12:32.615608       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 00:12:32.615623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 00:12:33.420377       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 00:12:33.420413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 00:12:33.435224       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 00:12:33.435335       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 00:12:33.473986       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 00:12:33.474086       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 00:12:33.487719       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 00:12:33.487748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 00:12:33.530959       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 00:12:33.530993       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1212 00:12:34.200812       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 12 00:22:40 addons-513852 kubelet[1352]: E1212 00:22:40.582728    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-zvbn2_default(27ad768a-ef86-4e41-b7b9-73dcb1adcf03)\"" pod="default/hello-world-app-5d77478584-zvbn2" podUID="27ad768a-ef86-4e41-b7b9-73dcb1adcf03"
	Dec 12 00:22:46 addons-513852 kubelet[1352]: E1212 00:22:46.583193    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="c44c9440-0c5a-45d9-bb4e-13c9f53d6c3a"
	Dec 12 00:22:51 addons-513852 kubelet[1352]: E1212 00:22:51.583956    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\"\"" pod="default/test-local-path" podUID="7c63f77a-c315-489d-8fed-a3446132fc8a"
	Dec 12 00:22:53 addons-513852 kubelet[1352]: I1212 00:22:53.582821    1352 scope.go:117] "RemoveContainer" containerID="3027d977d09c0ff2e9c8cf8eec0d25a55abb825e27966d181cb021042cb1a5f1"
	Dec 12 00:22:53 addons-513852 kubelet[1352]: E1212 00:22:53.583123    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-zvbn2_default(27ad768a-ef86-4e41-b7b9-73dcb1adcf03)\"" pod="default/hello-world-app-5d77478584-zvbn2" podUID="27ad768a-ef86-4e41-b7b9-73dcb1adcf03"
	Dec 12 00:22:57 addons-513852 kubelet[1352]: E1212 00:22:57.584432    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="c44c9440-0c5a-45d9-bb4e-13c9f53d6c3a"
	Dec 12 00:23:03 addons-513852 kubelet[1352]: E1212 00:23:03.584603    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\"\"" pod="default/test-local-path" podUID="7c63f77a-c315-489d-8fed-a3446132fc8a"
	Dec 12 00:23:06 addons-513852 kubelet[1352]: I1212 00:23:06.582273    1352 scope.go:117] "RemoveContainer" containerID="3027d977d09c0ff2e9c8cf8eec0d25a55abb825e27966d181cb021042cb1a5f1"
	Dec 12 00:23:07 addons-513852 kubelet[1352]: I1212 00:23:07.070767    1352 scope.go:117] "RemoveContainer" containerID="3027d977d09c0ff2e9c8cf8eec0d25a55abb825e27966d181cb021042cb1a5f1"
	Dec 12 00:23:07 addons-513852 kubelet[1352]: I1212 00:23:07.071006    1352 scope.go:117] "RemoveContainer" containerID="1462e89d429330fbce7b32b87d20cbd037c2a27630b9895091374339a85e3408"
	Dec 12 00:23:07 addons-513852 kubelet[1352]: E1212 00:23:07.071279    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-zvbn2_default(27ad768a-ef86-4e41-b7b9-73dcb1adcf03)\"" pod="default/hello-world-app-5d77478584-zvbn2" podUID="27ad768a-ef86-4e41-b7b9-73dcb1adcf03"
	Dec 12 00:23:12 addons-513852 kubelet[1352]: E1212 00:23:12.583829    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="c44c9440-0c5a-45d9-bb4e-13c9f53d6c3a"
	Dec 12 00:23:14 addons-513852 kubelet[1352]: E1212 00:23:14.583884    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\"\"" pod="default/test-local-path" podUID="7c63f77a-c315-489d-8fed-a3446132fc8a"
	Dec 12 00:23:20 addons-513852 kubelet[1352]: I1212 00:23:20.583269    1352 scope.go:117] "RemoveContainer" containerID="1462e89d429330fbce7b32b87d20cbd037c2a27630b9895091374339a85e3408"
	Dec 12 00:23:20 addons-513852 kubelet[1352]: E1212 00:23:20.583567    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-zvbn2_default(27ad768a-ef86-4e41-b7b9-73dcb1adcf03)\"" pod="default/hello-world-app-5d77478584-zvbn2" podUID="27ad768a-ef86-4e41-b7b9-73dcb1adcf03"
	Dec 12 00:23:26 addons-513852 kubelet[1352]: E1212 00:23:26.584768    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="c44c9440-0c5a-45d9-bb4e-13c9f53d6c3a"
	Dec 12 00:23:26 addons-513852 kubelet[1352]: E1212 00:23:26.585164    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\"\"" pod="default/test-local-path" podUID="7c63f77a-c315-489d-8fed-a3446132fc8a"
	Dec 12 00:23:33 addons-513852 kubelet[1352]: I1212 00:23:33.583105    1352 scope.go:117] "RemoveContainer" containerID="1462e89d429330fbce7b32b87d20cbd037c2a27630b9895091374339a85e3408"
	Dec 12 00:23:33 addons-513852 kubelet[1352]: E1212 00:23:33.583401    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-zvbn2_default(27ad768a-ef86-4e41-b7b9-73dcb1adcf03)\"" pod="default/hello-world-app-5d77478584-zvbn2" podUID="27ad768a-ef86-4e41-b7b9-73dcb1adcf03"
	Dec 12 00:23:35 addons-513852 kubelet[1352]: E1212 00:23:35.775833    1352 manager.go:1106] Failed to create existing container: /crio-aa1a620e73820ff4edeb2337e4458d387d6ae0ea3f8b0915883b475953f2b116: Error finding container aa1a620e73820ff4edeb2337e4458d387d6ae0ea3f8b0915883b475953f2b116: Status 404 returned error can't find the container with id aa1a620e73820ff4edeb2337e4458d387d6ae0ea3f8b0915883b475953f2b116
	Dec 12 00:23:39 addons-513852 kubelet[1352]: E1212 00:23:39.584142    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\"\"" pod="default/test-local-path" podUID="7c63f77a-c315-489d-8fed-a3446132fc8a"
	Dec 12 00:23:40 addons-513852 kubelet[1352]: E1212 00:23:40.661031    1352 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3a2f6ef39247adc6b8fe9ec88fd8af7458b608c5e4ad29972c7f629ed133a057/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3a2f6ef39247adc6b8fe9ec88fd8af7458b608c5e4ad29972c7f629ed133a057/diff: no such file or directory, extraDiskErr: <nil>
	Dec 12 00:23:48 addons-513852 kubelet[1352]: I1212 00:23:48.583018    1352 scope.go:117] "RemoveContainer" containerID="1462e89d429330fbce7b32b87d20cbd037c2a27630b9895091374339a85e3408"
	Dec 12 00:23:48 addons-513852 kubelet[1352]: E1212 00:23:48.583344    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-zvbn2_default(27ad768a-ef86-4e41-b7b9-73dcb1adcf03)\"" pod="default/hello-world-app-5d77478584-zvbn2" podUID="27ad768a-ef86-4e41-b7b9-73dcb1adcf03"
	Dec 12 00:23:52 addons-513852 kubelet[1352]: E1212 00:23:52.583465    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\"\"" pod="default/test-local-path" podUID="7c63f77a-c315-489d-8fed-a3446132fc8a"
	
	* 
	* ==> storage-provisioner [582f981971581f27a18ecff9abba1e059d9b6df136537998c5f2f99c23aeb845] <==
	* I1212 00:13:22.065137       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:13:22.099145       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:13:22.099228       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 00:13:22.174350       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:13:22.175830       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-513852_5c42c213-ab15-47c4-9420-9dcb09a350b9!
	I1212 00:13:22.194450       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aac062d1-c946-48d9-b1db-64590d74d0c4", APIVersion:"v1", ResourceVersion:"876", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-513852_5c42c213-ab15-47c4-9420-9dcb09a350b9 became leader
	I1212 00:13:22.276893       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-513852_5c42c213-ab15-47c4-9420-9dcb09a350b9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-513852 -n addons-513852
helpers_test.go:261: (dbg) Run:  kubectl --context addons-513852 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: task-pv-pod test-local-path
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-513852 describe pod task-pv-pod test-local-path
helpers_test.go:282: (dbg) kubectl --context addons-513852 describe pod task-pv-pod test-local-path:

                                                
                                                
-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-513852/192.168.49.2
	Start Time:       Tue, 12 Dec 2023 00:17:52 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v6hnq (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-v6hnq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m4s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-513852
	  Warning  Failed     4m2s (x2 over 5m33s)  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:736342e81e97220f954b8c33846ba80d2d95f59b30225a5c63d063c8b250b0ab in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m20s (x4 over 6m4s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     98s (x4 over 5m33s)   kubelet            Error: ErrImagePull
	  Warning  Failed     98s (x2 over 3m9s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     84s (x6 over 5m33s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    59s (x8 over 5m33s)   kubelet            Back-off pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-513852/192.168.49.2
	Start Time:       Tue, 12 Dec 2023 00:15:05 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rs7qj (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-rs7qj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m51s                  default-scheduler  Successfully assigned default/test-local-path to addons-513852
	  Warning  Failed     8m21s                  kubelet            Failed to pull image "busybox:stable": loading manifest for target platform: reading manifest sha256:1e190d3f03348e063cf58d643c2b39bed38f19d77a3accf616a0f53460671358 in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m3s (x4 over 8m21s)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m3s (x3 over 7m39s)   kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m51s (x6 over 8m20s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m40s (x7 over 8m20s)  kubelet            Back-off pulling image "busybox:stable"
	  Normal   Pulling    3m38s (x5 over 8m51s)  kubelet            Pulling image "busybox:stable"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CSI (371.24s)
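
Note: the post-mortem events above show every pull of docker.io/nginx failing with Docker Hub's toomanyrequests rate limit rather than any CSI fault. A minimal sketch of two ways to sidestep the anonymous pull quota when re-running this test locally (the minikube and kubectl commands are real; the profile name matches this run, while the secret name and the credentials are placeholders, not part of the test suite):

	# Pre-load the image into the cluster so the kubelet never has to pull it anonymously
	docker pull docker.io/library/nginx:latest            # authenticated pull on the host, if logged in
	minikube -p addons-513852 image load docker.io/library/nginx:latest

	# Or: give the default service account authenticated pull credentials
	kubectl --context addons-513852 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context addons-513852 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'

Either route avoids the anonymous docker.io quota that produced the ImagePullBackOff seen in the events.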

                                                
                                    
TestAddons/parallel/LocalPath (185.91s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-513852 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-513852 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-513852 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-513852 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-513852 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-513852 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-513852 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-513852 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7c63f77a-c315-489d-8fed-a3446132fc8a] Pending
helpers_test.go:344: "test-local-path" [7c63f77a-c315-489d-8fed-a3446132fc8a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
addons_test.go:885: ***** TestAddons/parallel/LocalPath: pod "run=test-local-path" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:885: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-513852 -n addons-513852
addons_test.go:885: TestAddons/parallel/LocalPath: showing logs for failed pods as of 2023-12-12 00:18:04.932845411 +0000 UTC m=+428.287831776
addons_test.go:885: (dbg) Run:  kubectl --context addons-513852 describe po test-local-path -n default
addons_test.go:885: (dbg) kubectl --context addons-513852 describe po test-local-path -n default:
Name:             test-local-path
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-513852/192.168.49.2
Start Time:       Tue, 12 Dec 2023 00:15:05 +0000
Labels:           run=test-local-path
Annotations:      <none>
Status:           Pending
IP:               10.244.0.23
IPs:
IP:  10.244.0.23
Containers:
busybox:
Container ID:  
Image:         busybox:stable
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sh
-c
echo 'local-path-provisioner' > /test/file1
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/test from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rs7qj (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
data:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  test-pvc
ReadOnly:   false
kube-api-access-rs7qj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/test-local-path to addons-513852
Warning  Failed     2m30s                kubelet            Failed to pull image "busybox:stable": loading manifest for target platform: reading manifest sha256:1e190d3f03348e063cf58d643c2b39bed38f19d77a3accf616a0f53460671358 in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     54s (x3 over 2m30s)  kubelet            Error: ErrImagePull
Warning  Failed     54s (x2 over 108s)   kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   BackOff    17s (x5 over 2m29s)  kubelet            Back-off pulling image "busybox:stable"
Warning  Failed     17s (x5 over 2m29s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    6s (x4 over 3m)      kubelet            Pulling image "busybox:stable"
addons_test.go:885: (dbg) Run:  kubectl --context addons-513852 logs test-local-path -n default
addons_test.go:885: (dbg) Non-zero exit: kubectl --context addons-513852 logs test-local-path -n default: exit status 1 (124.320685ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "test-local-path" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:885: kubectl --context addons-513852 logs test-local-path -n default: exit status 1
addons_test.go:886: failed waiting for test-local-path pod: run=test-local-path within 3m0s: context deadline exceeded
--- FAIL: TestAddons/parallel/LocalPath (185.91s)
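
Note: this failure has the same root cause as the CSI test above: anonymous pulls of busybox:stable from docker.io hit the toomanyrequests limit. A hedged sketch of how the remaining anonymous quota could be checked from the build host, using Docker Hub's documented ratelimitpreview/test endpoint (assumes curl and jq are available; not part of the test suite):

	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i '^ratelimit'

The ratelimit-limit and ratelimit-remaining headers report the current quota; a remaining value of 0 is consistent with the pull errors logged by the kubelet above.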

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (189.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ee03670d-7b8a-47cc-91b0-4f1e23b5629c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.027302187s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-885247 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-885247 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-885247 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-885247 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [977de9af-f406-4174-970f-2e5b50d0b31f] Pending
helpers_test.go:344: "sp-pod" [977de9af-f406-4174-970f-2e5b50d0b31f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1212 00:29:31.003188 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 00:29:31.009087 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 00:29:31.019372 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 00:29:31.039756 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 00:29:31.080064 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 00:29:31.160508 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 00:29:31.321559 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 00:29:31.642276 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 00:29:32.283171 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 00:29:33.563833 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 00:29:36.124238 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 00:29:41.245407 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 00:29:51.485920 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 00:30:11.966618 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 00:30:52.927521 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-885247 -n functional-885247
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2023-12-12 00:31:23.555515188 +0000 UTC m=+1226.910501553
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-885247 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-885247 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-885247/192.168.49.2
Start Time:       Tue, 12 Dec 2023 00:28:23 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8c9zf (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-8c9zf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  3m                  default-scheduler  Successfully assigned default/sp-pod to functional-885247
  Warning  Failed     2m2s                kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     31s (x2 over 2m2s)  kubelet            Error: ErrImagePull
  Warning  Failed     31s                 kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:736342e81e97220f954b8c33846ba80d2d95f59b30225a5c63d063c8b250b0ab in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    16s (x2 over 2m2s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     16s (x2 over 2m2s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    1s (x3 over 3m)     kubelet            Pulling image "docker.io/nginx"
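Note: the Warning events above show the actual cause: docker.io/nginx could not be pulled because Docker Hub's anonymous pull rate limit (toomanyrequests) was hit on this runner. A minimal sketch of checking the remaining quota from the same host; the token endpoint, the ratelimitpreview/test probe repository and the ratelimit-* response headers are Docker Hub's documented rate-limit check, not anything taken from this test run, and jq is assumed to be installed:

	# Fetch an anonymous pull token for Docker Hub's rate-limit probe repository.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	# A HEAD request for the manifest reports the current quota in its ratelimit-limit / ratelimit-remaining headers.
	curl -sI -H "Authorization: Bearer ${TOKEN}" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

Authenticated pulls (for example via an imagePullSecret on the pod, or docker login on the node) get a higher limit, which is why this failure tends to show up mainly on busy shared runners.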
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-885247 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-885247 logs sp-pod -n default: exit status 1 (105.478522ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-885247 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
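Note: everything needed to reproduce the stuck pod is already in the describe output above: a BestEffort pod labelled test=storage-provisioner, running docker.io/nginx as container myfrontend, with PVC myclaim mounted at /tmp/mount. The manifest below is a reconstruction from those fields for illustration only; the test applies its own testdata manifest and expects the myclaim PVC to exist already:

	# Hypothetical re-creation of sp-pod, pieced together from the describe output above.
	kubectl --context functional-885247 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: sp-pod
	  labels:
	    test: storage-provisioner
	spec:
	  containers:
	  - name: myfrontend
	    image: docker.io/nginx
	    volumeMounts:
	    - name: mypd
	      mountPath: /tmp/mount
	  volumes:
	  - name: mypd
	    persistentVolumeClaim:
	      claimName: myclaim
	EOF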
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-885247
helpers_test.go:235: (dbg) docker inspect functional-885247:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d89df14e9405b60c8487b14bc54f81f03ec29ca7cf0a04eb11dd6ffcaaa54960",
	        "Created": "2023-12-12T00:25:39.70663759Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1131812,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T00:25:40.035666243Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5372d9a9dbba152548ea1c7dddaca1a9a8c998722f22aaa148c1ee00bf6473be",
	        "ResolvConfPath": "/var/lib/docker/containers/d89df14e9405b60c8487b14bc54f81f03ec29ca7cf0a04eb11dd6ffcaaa54960/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d89df14e9405b60c8487b14bc54f81f03ec29ca7cf0a04eb11dd6ffcaaa54960/hostname",
	        "HostsPath": "/var/lib/docker/containers/d89df14e9405b60c8487b14bc54f81f03ec29ca7cf0a04eb11dd6ffcaaa54960/hosts",
	        "LogPath": "/var/lib/docker/containers/d89df14e9405b60c8487b14bc54f81f03ec29ca7cf0a04eb11dd6ffcaaa54960/d89df14e9405b60c8487b14bc54f81f03ec29ca7cf0a04eb11dd6ffcaaa54960-json.log",
	        "Name": "/functional-885247",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-885247:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-885247",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3ef2818982dc4c7dc970ad99941af0f38666f12070f7d8130f324f878ba2a204-init/diff:/var/lib/docker/overlay2/c2a4fdcea722509eecd2151e38f63a7bf15f9db138183afe352dd4d4bae4600f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ef2818982dc4c7dc970ad99941af0f38666f12070f7d8130f324f878ba2a204/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ef2818982dc4c7dc970ad99941af0f38666f12070f7d8130f324f878ba2a204/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ef2818982dc4c7dc970ad99941af0f38666f12070f7d8130f324f878ba2a204/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-885247",
	                "Source": "/var/lib/docker/volumes/functional-885247/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-885247",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-885247",
	                "name.minikube.sigs.k8s.io": "functional-885247",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4e7ecceebbd14714274e8449865d16a1fee19c7ebac24f7d475519d4cab6dbcd",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34020"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34019"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34016"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34018"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34017"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4e7ecceebbd1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-885247": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d89df14e9405",
	                        "functional-885247"
	                    ],
	                    "NetworkID": "91d31a3bacab9f10ad5331ebd87e51dc6fc2680a135987808c9724cab305031e",
	                    "EndpointID": "1be3389717d616449574516d046c14a0e7902c763ae92c232c3dcbbf0d2c9d71",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
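Note: the main thing this inspect output contributes to the post-mortem is the port map: every container port (22, 2376, 5000, 8441, 32443) is published on 127.0.0.1 with a dynamically assigned host port, and the SSH mapping (127.0.0.1:34020 -> 22/tcp in this run) is what the provisioning steps later in the log dial. The same information can be pulled out directly with the Go-template filter the harness itself uses; the jq variant is just an illustrative alternative and assumes jq is available:

	# Host port bound to the container's SSH port (prints 34020 for this run).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-885247
	# Full port map as JSON.
	docker container inspect -f '{{json .NetworkSettings.Ports}}' functional-885247 | jq .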
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-885247 -n functional-885247
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-885247 logs -n 25: (1.816694281s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-885247 ssh sudo                                               | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:27 UTC | 12 Dec 23 00:27 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-885247                                                        | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:27 UTC | 12 Dec 23 00:27 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-885247 ssh                                                    | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:27 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-885247 cache reload                                           | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:27 UTC | 12 Dec 23 00:27 UTC |
	| ssh     | functional-885247 ssh                                                    | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:27 UTC | 12 Dec 23 00:27 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 12 Dec 23 00:27 UTC | 12 Dec 23 00:27 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 12 Dec 23 00:27 UTC | 12 Dec 23 00:27 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-885247 kubectl --                                             | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:27 UTC | 12 Dec 23 00:27 UTC |
	|         | --context functional-885247                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-885247                                                     | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:27 UTC | 12 Dec 23 00:28 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	| service | invalid-svc -p                                                           | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:28 UTC |                     |
	|         | functional-885247                                                        |                   |         |         |                     |                     |
	| cp      | functional-885247 cp                                                     | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:28 UTC | 12 Dec 23 00:28 UTC |
	|         | testdata/cp-test.txt                                                     |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| config  | functional-885247 config unset                                           | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:28 UTC | 12 Dec 23 00:28 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-885247 config get                                             | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:28 UTC |                     |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-885247 config set                                             | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:28 UTC | 12 Dec 23 00:28 UTC |
	|         | cpus 2                                                                   |                   |         |         |                     |                     |
	| config  | functional-885247 config get                                             | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:28 UTC | 12 Dec 23 00:28 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| ssh     | functional-885247 ssh -n                                                 | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:28 UTC | 12 Dec 23 00:28 UTC |
	|         | functional-885247 sudo cat                                               |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| config  | functional-885247 config unset                                           | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:28 UTC | 12 Dec 23 00:28 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-885247 config get                                             | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:28 UTC |                     |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| ssh     | functional-885247 ssh echo                                               | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:28 UTC | 12 Dec 23 00:28 UTC |
	|         | hello                                                                    |                   |         |         |                     |                     |
	| cp      | functional-885247 cp                                                     | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:28 UTC | 12 Dec 23 00:28 UTC |
	|         | functional-885247:/home/docker/cp-test.txt                               |                   |         |         |                     |                     |
	|         | /tmp/TestFunctionalparallelCpCmd4099322654/001/cp-test.txt               |                   |         |         |                     |                     |
	| ssh     | functional-885247 ssh cat                                                | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:28 UTC | 12 Dec 23 00:28 UTC |
	|         | /etc/hostname                                                            |                   |         |         |                     |                     |
	| ssh     | functional-885247 ssh -n                                                 | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:28 UTC | 12 Dec 23 00:28 UTC |
	|         | functional-885247 sudo cat                                               |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| tunnel  | functional-885247 tunnel                                                 | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:28 UTC |                     |
	|         | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| tunnel  | functional-885247 tunnel                                                 | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:28 UTC |                     |
	|         | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| tunnel  | functional-885247 tunnel                                                 | functional-885247 | jenkins | v1.32.0 | 12 Dec 23 00:28 UTC |                     |
	|         | --alsologtostderr                                                        |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:27:36
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:27:36.572810 1136061 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:27:36.572947 1136061 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:27:36.572950 1136061 out.go:309] Setting ErrFile to fd 2...
	I1212 00:27:36.572955 1136061 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:27:36.573210 1136061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	I1212 00:27:36.573601 1136061 out.go:303] Setting JSON to false
	I1212 00:27:36.574708 1136061 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":25803,"bootTime":1702315054,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 00:27:36.574766 1136061 start.go:138] virtualization:  
	I1212 00:27:36.579395 1136061 out.go:177] * [functional-885247] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:27:36.581487 1136061 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:27:36.581556 1136061 notify.go:220] Checking for updates...
	I1212 00:27:36.585176 1136061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:27:36.587467 1136061 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:27:36.589632 1136061 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	I1212 00:27:36.591997 1136061 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:27:36.593881 1136061 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:27:36.596437 1136061 config.go:182] Loaded profile config "functional-885247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:27:36.596596 1136061 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:27:36.623397 1136061 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:27:36.623494 1136061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:27:36.707728 1136061 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-12 00:27:36.697984248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:27:36.707821 1136061 docker.go:295] overlay module found
	I1212 00:27:36.711355 1136061 out.go:177] * Using the docker driver based on existing profile
	I1212 00:27:36.713715 1136061 start.go:298] selected driver: docker
	I1212 00:27:36.713723 1136061 start.go:902] validating driver "docker" against &{Name:functional-885247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-885247 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:27:36.713828 1136061 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:27:36.713917 1136061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:27:36.781153 1136061 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-12 00:27:36.771264722 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:27:36.781584 1136061 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:27:36.781646 1136061 cni.go:84] Creating CNI manager for ""
	I1212 00:27:36.781652 1136061 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:27:36.781662 1136061 start_flags.go:323] config:
	{Name:functional-885247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-885247 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:27:36.784173 1136061 out.go:177] * Starting control plane node functional-885247 in cluster functional-885247
	I1212 00:27:36.786115 1136061 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 00:27:36.788053 1136061 out.go:177] * Pulling base image ...
	I1212 00:27:36.790134 1136061 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 00:27:36.790173 1136061 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1212 00:27:36.790180 1136061 cache.go:56] Caching tarball of preloaded images
	I1212 00:27:36.790208 1136061 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:27:36.790257 1136061 preload.go:174] Found /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:27:36.790266 1136061 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 00:27:36.790376 1136061 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/config.json ...
	I1212 00:27:36.807447 1136061 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon, skipping pull
	I1212 00:27:36.807462 1136061 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 exists in daemon, skipping load
	I1212 00:27:36.807480 1136061 cache.go:194] Successfully downloaded all kic artifacts
	I1212 00:27:36.807541 1136061 start.go:365] acquiring machines lock for functional-885247: {Name:mk6d08eac00ff7b7bb799338368fa8f91b7e0417 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:27:36.807609 1136061 start.go:369] acquired machines lock for "functional-885247" in 42.354µs
	I1212 00:27:36.807632 1136061 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:27:36.807637 1136061 fix.go:54] fixHost starting: 
	I1212 00:27:36.807906 1136061 cli_runner.go:164] Run: docker container inspect functional-885247 --format={{.State.Status}}
	I1212 00:27:36.826194 1136061 fix.go:102] recreateIfNeeded on functional-885247: state=Running err=<nil>
	W1212 00:27:36.826213 1136061 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 00:27:36.828835 1136061 out.go:177] * Updating the running docker "functional-885247" container ...
	I1212 00:27:36.830841 1136061 machine.go:88] provisioning docker machine ...
	I1212 00:27:36.830859 1136061 ubuntu.go:169] provisioning hostname "functional-885247"
	I1212 00:27:36.830935 1136061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885247
	I1212 00:27:36.848851 1136061 main.go:141] libmachine: Using SSH client type: native
	I1212 00:27:36.849311 1136061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34020 <nil> <nil>}
	I1212 00:27:36.849321 1136061 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-885247 && echo "functional-885247" | sudo tee /etc/hostname
	I1212 00:27:37.013082 1136061 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-885247
	
	I1212 00:27:37.013153 1136061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885247
	I1212 00:27:37.036946 1136061 main.go:141] libmachine: Using SSH client type: native
	I1212 00:27:37.037412 1136061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34020 <nil> <nil>}
	I1212 00:27:37.037429 1136061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-885247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-885247/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-885247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:27:37.178545 1136061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:27:37.178560 1136061 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17764-1111943/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-1111943/.minikube}
	I1212 00:27:37.178577 1136061 ubuntu.go:177] setting up certificates
	I1212 00:27:37.178586 1136061 provision.go:83] configureAuth start
	I1212 00:27:37.178647 1136061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-885247
	I1212 00:27:37.197478 1136061 provision.go:138] copyHostCerts
	I1212 00:27:37.197533 1136061 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem, removing ...
	I1212 00:27:37.197575 1136061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem
	I1212 00:27:37.197646 1136061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem (1082 bytes)
	I1212 00:27:37.197737 1136061 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem, removing ...
	I1212 00:27:37.197741 1136061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem
	I1212 00:27:37.197764 1136061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem (1123 bytes)
	I1212 00:27:37.197814 1136061 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem, removing ...
	I1212 00:27:37.197817 1136061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem
	I1212 00:27:37.197847 1136061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem (1679 bytes)
	I1212 00:27:37.197890 1136061 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem org=jenkins.functional-885247 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-885247]
	I1212 00:27:37.441178 1136061 provision.go:172] copyRemoteCerts
	I1212 00:27:37.441230 1136061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:27:37.441291 1136061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885247
	I1212 00:27:37.459136 1136061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34020 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/functional-885247/id_rsa Username:docker}
	I1212 00:27:37.559647 1136061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:27:37.587588 1136061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 00:27:37.615345 1136061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:27:37.643147 1136061 provision.go:86] duration metric: configureAuth took 464.548635ms
	I1212 00:27:37.643163 1136061 ubuntu.go:193] setting minikube options for container-runtime
	I1212 00:27:37.643363 1136061 config.go:182] Loaded profile config "functional-885247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:27:37.643463 1136061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885247
	I1212 00:27:37.660626 1136061 main.go:141] libmachine: Using SSH client type: native
	I1212 00:27:37.661029 1136061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34020 <nil> <nil>}
	I1212 00:27:37.661046 1136061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:27:43.101883 1136061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:27:43.101897 1136061 machine.go:91] provisioned docker machine in 6.271048775s
	I1212 00:27:43.101907 1136061 start.go:300] post-start starting for "functional-885247" (driver="docker")
	I1212 00:27:43.101916 1136061 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:27:43.102003 1136061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:27:43.102052 1136061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885247
	I1212 00:27:43.119856 1136061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34020 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/functional-885247/id_rsa Username:docker}
	I1212 00:27:43.220043 1136061 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:27:43.224266 1136061 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:27:43.224291 1136061 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 00:27:43.224301 1136061 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 00:27:43.224307 1136061 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 00:27:43.224317 1136061 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/addons for local assets ...
	I1212 00:27:43.224376 1136061 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/files for local assets ...
	I1212 00:27:43.224455 1136061 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> 11173832.pem in /etc/ssl/certs
	I1212 00:27:43.224529 1136061 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/test/nested/copy/1117383/hosts -> hosts in /etc/test/nested/copy/1117383
	I1212 00:27:43.224573 1136061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1117383
	I1212 00:27:43.235012 1136061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem --> /etc/ssl/certs/11173832.pem (1708 bytes)
	I1212 00:27:43.262112 1136061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/test/nested/copy/1117383/hosts --> /etc/test/nested/copy/1117383/hosts (40 bytes)
	I1212 00:27:43.289318 1136061 start.go:303] post-start completed in 187.324608ms
	I1212 00:27:43.289405 1136061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:27:43.289444 1136061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885247
	I1212 00:27:43.308553 1136061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34020 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/functional-885247/id_rsa Username:docker}
	I1212 00:27:43.403651 1136061 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:27:43.409552 1136061 fix.go:56] fixHost completed within 6.601908656s
	I1212 00:27:43.409566 1136061 start.go:83] releasing machines lock for "functional-885247", held for 6.601950033s
	I1212 00:27:43.409634 1136061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-885247
	I1212 00:27:43.426974 1136061 ssh_runner.go:195] Run: cat /version.json
	I1212 00:27:43.426996 1136061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:27:43.427014 1136061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885247
	I1212 00:27:43.427047 1136061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885247
	I1212 00:27:43.445077 1136061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34020 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/functional-885247/id_rsa Username:docker}
	I1212 00:27:43.453389 1136061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34020 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/functional-885247/id_rsa Username:docker}
	I1212 00:27:43.546417 1136061 ssh_runner.go:195] Run: systemctl --version
	I1212 00:27:43.712332 1136061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:27:43.869426 1136061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:27:43.874907 1136061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:27:43.885395 1136061 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 00:27:43.885460 1136061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:27:43.895686 1136061 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:27:43.895699 1136061 start.go:475] detecting cgroup driver to use...
	I1212 00:27:43.895729 1136061 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 00:27:43.895785 1136061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:27:43.910590 1136061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:27:43.923918 1136061 docker.go:203] disabling cri-docker service (if available) ...
	I1212 00:27:43.923970 1136061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:27:43.938497 1136061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:27:43.951875 1136061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:27:44.080607 1136061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:27:44.212720 1136061 docker.go:219] disabling docker service ...
	I1212 00:27:44.212791 1136061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:27:44.228689 1136061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:27:44.242263 1136061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:27:44.380851 1136061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:27:44.515736 1136061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:27:44.529416 1136061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:27:44.549015 1136061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 00:27:44.549069 1136061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:27:44.560746 1136061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:27:44.560821 1136061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:27:44.574640 1136061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:27:44.587140 1136061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:27:44.599002 1136061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:27:44.610543 1136061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:27:44.620657 1136061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:27:44.630764 1136061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:27:44.750926 1136061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:27:44.911408 1136061 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:27:44.911478 1136061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:27:44.915853 1136061 start.go:543] Will wait 60s for crictl version
	I1212 00:27:44.915911 1136061 ssh_runner.go:195] Run: which crictl
	I1212 00:27:44.919985 1136061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:27:44.960961 1136061 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1212 00:27:44.961045 1136061 ssh_runner.go:195] Run: crio --version
	I1212 00:27:45.006698 1136061 ssh_runner.go:195] Run: crio --version
	I1212 00:27:45.057761 1136061 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1212 00:27:45.059894 1136061 cli_runner.go:164] Run: docker network inspect functional-885247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:27:45.078671 1136061 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:27:45.085398 1136061 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 00:27:45.087462 1136061 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 00:27:45.087562 1136061 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:27:45.144058 1136061 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 00:27:45.144070 1136061 crio.go:415] Images already preloaded, skipping extraction
	I1212 00:27:45.144134 1136061 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:27:45.187725 1136061 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 00:27:45.187738 1136061 cache_images.go:84] Images are preloaded, skipping loading
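The "all images are preloaded ... skipping loading" decision above amounts to listing the runtime's images and checking that every required tag is present. A hedged sketch of that check; the JSON field names (images, id, repoTags) are assumptions about crictl's output schema, not code copied from minikube's crio.go:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models just enough of `sudo crictl images --output json`
// for the check below.
type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// allPreloaded reports whether every wanted tag is already in the runtime's
// image store, the condition behind the "skipping extraction" log lines above.
func allPreloaded(want []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, w := range want {
		if !have[w] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := allPreloaded([]string{
		"registry.k8s.io/kube-apiserver:v1.28.4",
		"registry.k8s.io/pause:3.9",
	})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("all images preloaded:", ok)
}
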
	I1212 00:27:45.187813 1136061 ssh_runner.go:195] Run: crio config
	I1212 00:27:45.254582 1136061 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 00:27:45.254612 1136061 cni.go:84] Creating CNI manager for ""
	I1212 00:27:45.254621 1136061 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:27:45.254631 1136061 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 00:27:45.254654 1136061 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-885247 NodeName:functional-885247 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:27:45.254790 1136061 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-885247"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:27:45.254855 1136061 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=functional-885247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-885247 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I1212 00:27:45.254919 1136061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 00:27:45.266599 1136061 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:27:45.266671 1136061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:27:45.279395 1136061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (427 bytes)
	I1212 00:27:45.300781 1136061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:27:45.322803 1136061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1948 bytes)
	I1212 00:27:45.343639 1136061 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:27:45.348257 1136061 certs.go:56] Setting up /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247 for IP: 192.168.49.2
	I1212 00:27:45.348279 1136061 certs.go:190] acquiring lock for shared ca certs: {Name:mk50788b4819ee46b65351495e43cdf246a6ddce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:27:45.348402 1136061 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key
	I1212 00:27:45.348445 1136061 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key
	I1212 00:27:45.348517 1136061 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.key
	I1212 00:27:45.348571 1136061 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/apiserver.key.dd3b5fb2
	I1212 00:27:45.348616 1136061 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/proxy-client.key
	I1212 00:27:45.348731 1136061 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383.pem (1338 bytes)
	W1212 00:27:45.348756 1136061 certs.go:433] ignoring /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383_empty.pem, impossibly tiny 0 bytes
	I1212 00:27:45.348767 1136061 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:27:45.348801 1136061 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:27:45.348823 1136061 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:27:45.348844 1136061 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem (1679 bytes)
	I1212 00:27:45.348893 1136061 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem (1708 bytes)
	I1212 00:27:45.349565 1136061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 00:27:45.377851 1136061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:27:45.404863 1136061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:27:45.432332 1136061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:27:45.459598 1136061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:27:45.487102 1136061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:27:45.514545 1136061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:27:45.541091 1136061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:27:45.568200 1136061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:27:45.597013 1136061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383.pem --> /usr/share/ca-certificates/1117383.pem (1338 bytes)
	I1212 00:27:45.624171 1136061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem --> /usr/share/ca-certificates/11173832.pem (1708 bytes)
	I1212 00:27:45.651160 1136061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:27:45.671191 1136061 ssh_runner.go:195] Run: openssl version
	I1212 00:27:45.678021 1136061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11173832.pem && ln -fs /usr/share/ca-certificates/11173832.pem /etc/ssl/certs/11173832.pem"
	I1212 00:27:45.689311 1136061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11173832.pem
	I1212 00:27:45.693655 1136061 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:25 /usr/share/ca-certificates/11173832.pem
	I1212 00:27:45.693707 1136061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11173832.pem
	I1212 00:27:45.705813 1136061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11173832.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:27:45.716578 1136061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:27:45.727956 1136061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:27:45.732655 1136061 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:27:45.732707 1136061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:27:45.740936 1136061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:27:45.751242 1136061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1117383.pem && ln -fs /usr/share/ca-certificates/1117383.pem /etc/ssl/certs/1117383.pem"
	I1212 00:27:45.762582 1136061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1117383.pem
	I1212 00:27:45.767088 1136061 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:25 /usr/share/ca-certificates/1117383.pem
	I1212 00:27:45.767139 1136061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1117383.pem
	I1212 00:27:45.775675 1136061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1117383.pem /etc/ssl/certs/51391683.0"
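The pattern above copies each PEM into /usr/share/ca-certificates, computes OpenSSL's subject hash, and links /etc/ssl/certs/<hash>.0 at it, which is where OpenSSL-based clients look up trust anchors. A standalone sketch of that pattern (shelling out to openssl for the hash, as the log does); not minikube's certs.go:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a CA certificate already
// placed under /usr/share/ca-certificates and creates the /etc/ssl/certs/<hash>.0
// symlink pointing at it. Needs root.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // same effect as the `ln -fs` in the log
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
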
	I1212 00:27:45.786219 1136061 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 00:27:45.790667 1136061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:27:45.799276 1136061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:27:45.807797 1136061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:27:45.816157 1136061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:27:45.824247 1136061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:27:45.832521 1136061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
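The `openssl x509 -noout -checkend 86400` runs above ask whether each certificate expires within the next 24 hours. The same check can be done in-process with crypto/x509; a minimal sketch using one of the paths from the log (the log itself shells out to openssl):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin answers the same question as `openssl x509 -noout -checkend 86400`:
// does the certificate at path expire within d from now?
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
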
	I1212 00:27:45.840841 1136061 kubeadm.go:404] StartCluster: {Name:functional-885247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-885247 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:27:45.840921 1136061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:27:45.840978 1136061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:27:45.886724 1136061 cri.go:89] found id: "3eebb53719b044ff0f8bfa01642c6aa7d668c9954c4fcd8d3ee434c139de9981"
	I1212 00:27:45.886737 1136061 cri.go:89] found id: "9cd722191fc338aac7a89c2d00db5070b6a648ff832576463b971e453ab713e1"
	I1212 00:27:45.886741 1136061 cri.go:89] found id: "2c54ff2857a72c4b4ea328dd0091cb770a0fab5c5afa5e9aefed96355b2b5ad4"
	I1212 00:27:45.886744 1136061 cri.go:89] found id: "6b9b0b97d1c05851d668286a34fcf837e4ae49285783665afaee0ae278d58028"
	I1212 00:27:45.886748 1136061 cri.go:89] found id: "75a71101cb29c1bc16dd578763bae25ef2bcd63c096a709c4c4eb0a18733ca21"
	I1212 00:27:45.886751 1136061 cri.go:89] found id: "6ba90d5c93c882b7c101e94fc79912cb65497daa618adc31a3ff5b5d57779dc6"
	I1212 00:27:45.886755 1136061 cri.go:89] found id: "d2fe0dff992ba2c20f40895b00210ceeaca8893cbb7bbfdcc7269f5ca8b8aa06"
	I1212 00:27:45.886758 1136061 cri.go:89] found id: "75cacdd655892e4f37d19930227122ee9232273b9a2142a2d978b0b4aa0b6224"
	I1212 00:27:45.886761 1136061 cri.go:89] found id: ""
	I1212 00:27:45.886810 1136061 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:27:45.912437 1136061 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"2c54ff2857a72c4b4ea328dd0091cb770a0fab5c5afa5e9aefed96355b2b5ad4","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/2c54ff2857a72c4b4ea328dd0091cb770a0fab5c5afa5e9aefed96355b2b5ad4/userdata","rootfs":"/var/lib/containers/storage/overlay/6dde9c036de0ef07931b198c6098847f19cf1f1d25a743e5db476e9a6b228f0c/merged","created":"2023-12-12T00:27:08.681306221Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"13a375b3","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"13a375b3\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMe
ssagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2c54ff2857a72c4b4ea328dd0091cb770a0fab5c5afa5e9aefed96355b2b5ad4","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-12T00:27:08.570511222Z","io.kubernetes.cri-o.Image":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-q94g5\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1dec636d-10c8-4b12-a0db-e75e06404b73\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-q94g5_1dec636d-10c8-4b12-a0db-e75e06404b73/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-
o.MountPoint":"/var/lib/containers/storage/overlay/6dde9c036de0ef07931b198c6098847f19cf1f1d25a743e5db476e9a6b228f0c/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-q94g5_kube-system_1dec636d-10c8-4b12-a0db-e75e06404b73_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/dee35c14ebcd11af131c5c6a21ae6cccf1f25cdf304612b0cdba561f96756499/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"dee35c14ebcd11af131c5c6a21ae6cccf1f25cdf304612b0cdba561f96756499","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-q94g5_kube-system_1dec636d-10c8-4b12-a0db-e75e06404b73_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propag
ation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1dec636d-10c8-4b12-a0db-e75e06404b73/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1dec636d-10c8-4b12-a0db-e75e06404b73/containers/kindnet-cni/98710905\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/1dec636d-10c8-4b12-a0db-e75e06404b73/volumes/kubernetes.io~projected/kube-api-access-xbxbj\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-q94g5","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1dec636d-10c8-4b12-a0db
-e75e06404b73","kubernetes.io/config.seen":"2023-12-12T00:26:19.017516727Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3eebb53719b044ff0f8bfa01642c6aa7d668c9954c4fcd8d3ee434c139de9981","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3eebb53719b044ff0f8bfa01642c6aa7d668c9954c4fcd8d3ee434c139de9981/userdata","rootfs":"/var/lib/containers/storage/overlay/00ebf563cf7dc217eb0d671c27744ff4f929029f41df272105feef5ae66ac001/merged","created":"2023-12-12T00:27:08.693557846Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3b67b46f","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kub
ernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3b67b46f\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3eebb53719b044ff0f8bfa01642c6aa7d668c9954c4fcd8d3ee434c139de9981","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-12T00:27:08.603963968Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernet
es.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-hfstc\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1887835a-db28-446b-9a35-801282264ada\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-hfstc_1887835a-db28-446b-9a35-801282264ada/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/00ebf563cf7dc217eb0d671c27744ff4f929029f41df272105feef5ae66ac001/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-hfstc_kube-system_1887835a-db28-446b-9a35-801282264ada_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c50523ec72897d015d0c6d1b0c872d632b2d52b31a4345eed65ecb54088290b4/userdata/
resolv.conf","io.kubernetes.cri-o.SandboxID":"c50523ec72897d015d0c6d1b0c872d632b2d52b31a4345eed65ecb54088290b4","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-hfstc_kube-system_1887835a-db28-446b-9a35-801282264ada_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/1887835a-db28-446b-9a35-801282264ada/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1887835a-db28-446b-9a35-801282264ada/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1887835a-db28-446b-9a35-801282264ada/containers/coredns/c8f346a7\",\"readonly\":false,\"propagation\":0,\"selinux_rela
bel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/1887835a-db28-446b-9a35-801282264ada/volumes/kubernetes.io~projected/kube-api-access-hj9wk\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-hfstc","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1887835a-db28-446b-9a35-801282264ada","kubernetes.io/config.seen":"2023-12-12T00:26:50.684232583Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6b9b0b97d1c05851d668286a34fcf837e4ae49285783665afaee0ae278d58028","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/6b9b0b97d1c05851d668286a34fcf837e4ae49285783665afaee0ae278d58028/userdata","rootfs":"/var/lib/containers/storage/overlay/743474c72c645731d66836e785d15d7bdaf7f0c79c7fc3e0798611938bb13910/merged","created":"2023-12-12T00:27:08.698983835Z","annotati
ons":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e1639c7a","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e1639c7a\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"6b9b0b97d1c05851d668286a34fcf837e4ae49285783665afaee0ae278d58028","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-12T00:27:08.531742343Z","io.kubernetes.cri-o.Image":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.4","io.kubernetes.cri-o.Im
ageRef":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-885247\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"736c868da296e815b5dd122ae9441e02\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-885247_736c868da296e815b5dd122ae9441e02/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/743474c72c645731d66836e785d15d7bdaf7f0c79c7fc3e0798611938bb13910/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-885247_kube-system_736c868da296e815b5dd122ae9441e02_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7126037a44d56acbb8c0756119a868b44d30b456f9b041548040f31eac424284/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":
"7126037a44d56acbb8c0756119a868b44d30b456f9b041548040f31eac424284","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-885247_kube-system_736c868da296e815b5dd122ae9441e02_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/736c868da296e815b5dd122ae9441e02/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/736c868da296e815b5dd122ae9441e02/containers/kube-scheduler/954183bc\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-885247","io.kubernetes.pod.name
space":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"736c868da296e815b5dd122ae9441e02","kubernetes.io/config.hash":"736c868da296e815b5dd122ae9441e02","kubernetes.io/config.seen":"2023-12-12T00:25:58.862993059Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6ba90d5c93c882b7c101e94fc79912cb65497daa618adc31a3ff5b5d57779dc6","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/6ba90d5c93c882b7c101e94fc79912cb65497daa618adc31a3ff5b5d57779dc6/userdata","rootfs":"/var/lib/containers/storage/overlay/e0789b432ff123f0d0e5c03e8090cfcd34ba445641d82758c7034ab8c5f35cbf/merged","created":"2023-12-12T00:27:08.691044558Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b60ddd3e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessage
Policy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b60ddd3e\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"6ba90d5c93c882b7c101e94fc79912cb65497daa618adc31a3ff5b5d57779dc6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-12T00:27:08.478711608Z","io.kubernetes.cri-o.Image":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.4","io.kubernetes.cri-o.ImageRef":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-885247\",\"io.kubernetes.pod.namespace\":\"ku
be-system\",\"io.kubernetes.pod.uid\":\"d3a290d9df55f7bd7f9d2fd79af081cb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-885247_d3a290d9df55f7bd7f9d2fd79af081cb/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e0789b432ff123f0d0e5c03e8090cfcd34ba445641d82758c7034ab8c5f35cbf/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-885247_kube-system_d3a290d9df55f7bd7f9d2fd79af081cb_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1a6ccf55d49b44650667c00b79aae9157c3f4c28ffd5b6c581d4cd790168b434/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"1a6ccf55d49b44650667c00b79aae9157c3f4c28ffd5b6c581d4cd790168b434","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-885247_kube-system_d3a290d9df55f7bd7f9d2fd79af081cb_0","io.kubernetes.cr
i-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d3a290d9df55f7bd7f9d2fd79af081cb/containers/kube-controller-manager/c747de5a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d3a290d9df55f7bd7f9d2fd79af081cb/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selin
ux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-885247","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d3a290d9df55f7bd7f9d2fd79af081cb","kubernetes.io/config.hash":"d3a290d9df55f7bd7f9d2fd79af081cb","kubernetes.io/config.seen
":"2023-12-12T00:25:58.862991984Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"75a71101cb29c1bc16dd578763bae25ef2bcd63c096a709c4c4eb0a18733ca21","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/75a71101cb29c1bc16dd578763bae25ef2bcd63c096a709c4c4eb0a18733ca21/userdata","rootfs":"/var/lib/containers/storage/overlay/e2c11fe61626066d3cd8a30d822568cb046394f097bb702fc7a44794891eb7b7/merged","created":"2023-12-12T00:27:08.668148374Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"55ae7856","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"55ae7856\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.k
ubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"75a71101cb29c1bc16dd578763bae25ef2bcd63c096a709c4c4eb0a18733ca21","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-12T00:27:08.514526106Z","io.kubernetes.cri-o.Image":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.4","io.kubernetes.cri-o.ImageRef":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-885247\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"49bc24072f63cc7612bd5c285c2fac83\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-885247_49bc24072f63cc7612bd5c285c2fac83/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":
"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e2c11fe61626066d3cd8a30d822568cb046394f097bb702fc7a44794891eb7b7/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-885247_kube-system_49bc24072f63cc7612bd5c285c2fac83_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/15cc41a8d32b2202b0c75b2bcc0bdff525d762e63e012ca6741e8399801077be/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"15cc41a8d32b2202b0c75b2bcc0bdff525d762e63e012ca6741e8399801077be","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-functional-885247_kube-system_49bc24072f63cc7612bd5c285c2fac83_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/49bc24072f63cc7612bd5c285c2fac83/containers/kube-apiser
ver/5616e7b2\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/49bc24072f63cc7612bd5c285c2fac83/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false
}]","io.kubernetes.pod.name":"kube-apiserver-functional-885247","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"49bc24072f63cc7612bd5c285c2fac83","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"49bc24072f63cc7612bd5c285c2fac83","kubernetes.io/config.seen":"2023-12-12T00:25:58.862990704Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"75cacdd655892e4f37d19930227122ee9232273b9a2142a2d978b0b4aa0b6224","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/75cacdd655892e4f37d19930227122ee9232273b9a2142a2d978b0b4aa0b6224/userdata","rootfs":"/var/lib/containers/storage/overlay/5ee8636be030e436f27eac70cfa21e8b8a26ace0d43901f18ec82e9907258cbf/merged","created":"2023-12-12T00:27:08.683214759Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"23304df","io.kubernetes.container.name":"etcd","io.k
ubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"23304df\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"75cacdd655892e4f37d19930227122ee9232273b9a2142a2d978b0b4aa0b6224","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-12T00:27:08.44589827Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etc
d\",\"io.kubernetes.pod.name\":\"etcd-functional-885247\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0e48134f7fc66272b6995c5ca9a9da7c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-885247_0e48134f7fc66272b6995c5ca9a9da7c/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5ee8636be030e436f27eac70cfa21e8b8a26ace0d43901f18ec82e9907258cbf/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-functional-885247_kube-system_0e48134f7fc66272b6995c5ca9a9da7c_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/530ead106ebfb33feb9fe5b290e01aecd93d4460559e1bc41e8fe64f5d1ff342/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"530ead106ebfb33feb9fe5b290e01aecd93d4460559e1bc41e8fe64f5d1ff342","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-885247_kube-system_0e48134f7fc66272b6995c5ca9a9da7c_0","io.kubernetes.cri-o.SeccompProfilePa
th":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0e48134f7fc66272b6995c5ca9a9da7c/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0e48134f7fc66272b6995c5ca9a9da7c/containers/etcd/5b06cf24\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-885247","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"0
e48134f7fc66272b6995c5ca9a9da7c","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"0e48134f7fc66272b6995c5ca9a9da7c","kubernetes.io/config.seen":"2023-12-12T00:25:58.862984534Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9cd722191fc338aac7a89c2d00db5070b6a648ff832576463b971e453ab713e1","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9cd722191fc338aac7a89c2d00db5070b6a648ff832576463b971e453ab713e1/userdata","rootfs":"/var/lib/containers/storage/overlay/d603be8ab140adf27bbee0761060297fed532f1f7bba80eb216c6bc435603346/merged","created":"2023-12-12T00:27:08.768904581Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"857b13d","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.
cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"857b13d\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9cd722191fc338aac7a89c2d00db5070b6a648ff832576463b971e453ab713e1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-12T00:27:08.595632366Z","io.kubernetes.cri-o.Image":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.4","io.kubernetes.cri-o.ImageRef":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-ls4xf\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b3a9b70b-4d04-4fc1-8cb8-24594a551772\"}","
io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-ls4xf_b3a9b70b-4d04-4fc1-8cb8-24594a551772/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d603be8ab140adf27bbee0761060297fed532f1f7bba80eb216c6bc435603346/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-ls4xf_kube-system_b3a9b70b-4d04-4fc1-8cb8-24594a551772_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/69a71f2200248dc75ccd332c7ca444b5b8b9b2a5d271181f50dc8e2d89658caf/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"69a71f2200248dc75ccd332c7ca444b5b8b9b2a5d271181f50dc8e2d89658caf","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-ls4xf_kube-system_b3a9b70b-4d04-4fc1-8cb8-24594a551772_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"containe
r_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b3a9b70b-4d04-4fc1-8cb8-24594a551772/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b3a9b70b-4d04-4fc1-8cb8-24594a551772/containers/kube-proxy/c963852b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/b3a9b70b-4d04-4fc1-8cb8-24594a551772/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/b3a9b70b-4d04-4fc1-8cb8-
24594a551772/volumes/kubernetes.io~projected/kube-api-access-9mg7q\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-ls4xf","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b3a9b70b-4d04-4fc1-8cb8-24594a551772","kubernetes.io/config.seen":"2023-12-12T00:26:18.996976872Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d2fe0dff992ba2c20f40895b00210ceeaca8893cbb7bbfdcc7269f5ca8b8aa06","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d2fe0dff992ba2c20f40895b00210ceeaca8893cbb7bbfdcc7269f5ca8b8aa06/userdata","rootfs":"/var/lib/containers/storage/overlay/97b664b8788c967ab7aa0121208c716648f86d7198dbfb70ab4cd47ea1222dc1/merged","created":"2023-12-12T00:27:08.572963736Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5cfaa472","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.contai
ner.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5cfaa472\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d2fe0dff992ba2c20f40895b00210ceeaca8893cbb7bbfdcc7269f5ca8b8aa06","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-12T00:27:08.471062981Z","io.kubernetes.cri-o.Image":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"sto
rage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ee03670d-7b8a-47cc-91b0-4f1e23b5629c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_ee03670d-7b8a-47cc-91b0-4f1e23b5629c/storage-provisioner/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/97b664b8788c967ab7aa0121208c716648f86d7198dbfb70ab4cd47ea1222dc1/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_ee03670d-7b8a-47cc-91b0-4f1e23b5629c_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/09c279953d33e424959fc8b620ddbc8dcf2d5004d32ae3b1c2e45e0de4c50a63/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"09c279953d33e424959fc8b620ddbc8dcf2d5004d32ae3b1c2e45e0de4c50a63","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_ee03670d-7b8a
-47cc-91b0-4f1e23b5629c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ee03670d-7b8a-47cc-91b0-4f1e23b5629c/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ee03670d-7b8a-47cc-91b0-4f1e23b5629c/containers/storage-provisioner/aad18cb5\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/ee03670d-7b8a-47cc-91b0-4f1e23b5629c/volumes/kubernetes.io~projected/kube-api-access-xsdpl\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.po
d.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ee03670d-7b8a-47cc-91b0-4f1e23b5629c","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2023-12-12T00:26:50.685671211Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I1212 00:27:45.913035 1136061 cri.go:126] list returned 8 containers
	I1212 00:27:45.913043 1136061 cri.go:129] container: {ID:2c54ff2857a72c4b4ea328dd0091cb770a0fab5c5afa5e9aefed96355b2b5ad4 Status:stopped}
	I1212 00:27:45.913056 1136061 cri.go:135] skipping {2c54ff2857a72c4b4ea328dd0091cb770a0fab5c5afa5e9aefed96355b2b5ad4 stopped}: state = "stopped", want "paused"
	I1212 00:27:45.913064 1136061 cri.go:129] container: {ID:3eebb53719b044ff0f8bfa01642c6aa7d668c9954c4fcd8d3ee434c139de9981 Status:stopped}
	I1212 00:27:45.913071 1136061 cri.go:135] skipping {3eebb53719b044ff0f8bfa01642c6aa7d668c9954c4fcd8d3ee434c139de9981 stopped}: state = "stopped", want "paused"
	I1212 00:27:45.913077 1136061 cri.go:129] container: {ID:6b9b0b97d1c05851d668286a34fcf837e4ae49285783665afaee0ae278d58028 Status:stopped}
	I1212 00:27:45.913083 1136061 cri.go:135] skipping {6b9b0b97d1c05851d668286a34fcf837e4ae49285783665afaee0ae278d58028 stopped}: state = "stopped", want "paused"
	I1212 00:27:45.913088 1136061 cri.go:129] container: {ID:6ba90d5c93c882b7c101e94fc79912cb65497daa618adc31a3ff5b5d57779dc6 Status:stopped}
	I1212 00:27:45.913093 1136061 cri.go:135] skipping {6ba90d5c93c882b7c101e94fc79912cb65497daa618adc31a3ff5b5d57779dc6 stopped}: state = "stopped", want "paused"
	I1212 00:27:45.913098 1136061 cri.go:129] container: {ID:75a71101cb29c1bc16dd578763bae25ef2bcd63c096a709c4c4eb0a18733ca21 Status:stopped}
	I1212 00:27:45.913104 1136061 cri.go:135] skipping {75a71101cb29c1bc16dd578763bae25ef2bcd63c096a709c4c4eb0a18733ca21 stopped}: state = "stopped", want "paused"
	I1212 00:27:45.913109 1136061 cri.go:129] container: {ID:75cacdd655892e4f37d19930227122ee9232273b9a2142a2d978b0b4aa0b6224 Status:stopped}
	I1212 00:27:45.913115 1136061 cri.go:135] skipping {75cacdd655892e4f37d19930227122ee9232273b9a2142a2d978b0b4aa0b6224 stopped}: state = "stopped", want "paused"
	I1212 00:27:45.913122 1136061 cri.go:129] container: {ID:9cd722191fc338aac7a89c2d00db5070b6a648ff832576463b971e453ab713e1 Status:stopped}
	I1212 00:27:45.913128 1136061 cri.go:135] skipping {9cd722191fc338aac7a89c2d00db5070b6a648ff832576463b971e453ab713e1 stopped}: state = "stopped", want "paused"
	I1212 00:27:45.913133 1136061 cri.go:129] container: {ID:d2fe0dff992ba2c20f40895b00210ceeaca8893cbb7bbfdcc7269f5ca8b8aa06 Status:stopped}
	I1212 00:27:45.913138 1136061 cri.go:135] skipping {d2fe0dff992ba2c20f40895b00210ceeaca8893cbb7bbfdcc7269f5ca8b8aa06 stopped}: state = "stopped", want "paused"
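The cri.go:129/135 lines above show the state filter at work: the listing asked for paused containers, so each of the eight stopped containers is skipped. A small sketch of that filtering step, with hypothetical sample data (truncated IDs) standing in for the real list:

package main

import "fmt"

// container mirrors the ID/Status pairs logged by cri.go above.
type container struct {
	ID     string
	Status string
}

// filterByState keeps only containers whose status matches want and logs a
// "skipping" line for the rest, matching the cri.go:135 lines above; with
// only stopped containers present, a "paused" query keeps nothing.
func filterByState(cs []container, want string) []container {
	var kept []container
	for _, c := range cs {
		if c.Status != want {
			fmt.Printf("skipping %s: state = %q, want %q\n", c.ID, c.Status, want)
			continue
		}
		kept = append(kept, c)
	}
	return kept
}

func main() {
	// Hypothetical sample data; the real IDs are the 64-character ones above.
	cs := []container{
		{ID: "2c54ff2857a7", Status: "stopped"},
		{ID: "3eebb53719b0", Status: "stopped"},
	}
	fmt.Println("kept:", filterByState(cs, "paused"))
}
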
	I1212 00:27:45.913193 1136061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:27:45.923754 1136061 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 00:27:45.923766 1136061 kubeadm.go:636] restartCluster start
	I1212 00:27:45.923824 1136061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:27:45.933968 1136061 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:27:45.934493 1136061 kubeconfig.go:92] found "functional-885247" server: "https://192.168.49.2:8441"
	I1212 00:27:45.936218 1136061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:27:45.946787 1136061 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-12-12 00:25:50.000662855 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-12-12 00:27:45.337249923 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
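The "needs reconfigure: configs differ" decision is driven by the unified diff shown above: if the current kubeadm.yaml and the freshly generated kubeadm.yaml.new differ, the cluster is reconfigured rather than reused. A sketch of such a check using diff's exit status (0 = identical, 1 = differ, >1 = failure); a standalone approximation, not minikube's kubeadm.go:

package main

import (
	"fmt"
	"os/exec"
)

// configsDiffer runs `diff -u current proposed` and interprets exit status 1
// as "files differ", which is the shape of the needs-reconfigure check above.
func configsDiffer(current, proposed string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", current, proposed).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, string(out), err
}

func main() {
	differ, diff, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if differ {
		fmt.Print("needs reconfigure:\n" + diff)
	}
}
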
	I1212 00:27:45.946795 1136061 kubeadm.go:1135] stopping kube-system containers ...
	I1212 00:27:45.946805 1136061 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 00:27:45.946865 1136061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:27:45.987391 1136061 cri.go:89] found id: "3eebb53719b044ff0f8bfa01642c6aa7d668c9954c4fcd8d3ee434c139de9981"
	I1212 00:27:45.987404 1136061 cri.go:89] found id: "9cd722191fc338aac7a89c2d00db5070b6a648ff832576463b971e453ab713e1"
	I1212 00:27:45.987409 1136061 cri.go:89] found id: "2c54ff2857a72c4b4ea328dd0091cb770a0fab5c5afa5e9aefed96355b2b5ad4"
	I1212 00:27:45.987413 1136061 cri.go:89] found id: "6b9b0b97d1c05851d668286a34fcf837e4ae49285783665afaee0ae278d58028"
	I1212 00:27:45.987416 1136061 cri.go:89] found id: "75a71101cb29c1bc16dd578763bae25ef2bcd63c096a709c4c4eb0a18733ca21"
	I1212 00:27:45.987419 1136061 cri.go:89] found id: "6ba90d5c93c882b7c101e94fc79912cb65497daa618adc31a3ff5b5d57779dc6"
	I1212 00:27:45.987422 1136061 cri.go:89] found id: "d2fe0dff992ba2c20f40895b00210ceeaca8893cbb7bbfdcc7269f5ca8b8aa06"
	I1212 00:27:45.987425 1136061 cri.go:89] found id: "75cacdd655892e4f37d19930227122ee9232273b9a2142a2d978b0b4aa0b6224"
	I1212 00:27:45.987428 1136061 cri.go:89] found id: ""
	I1212 00:27:45.987432 1136061 cri.go:234] Stopping containers: [3eebb53719b044ff0f8bfa01642c6aa7d668c9954c4fcd8d3ee434c139de9981 9cd722191fc338aac7a89c2d00db5070b6a648ff832576463b971e453ab713e1 2c54ff2857a72c4b4ea328dd0091cb770a0fab5c5afa5e9aefed96355b2b5ad4 6b9b0b97d1c05851d668286a34fcf837e4ae49285783665afaee0ae278d58028 75a71101cb29c1bc16dd578763bae25ef2bcd63c096a709c4c4eb0a18733ca21 6ba90d5c93c882b7c101e94fc79912cb65497daa618adc31a3ff5b5d57779dc6 d2fe0dff992ba2c20f40895b00210ceeaca8893cbb7bbfdcc7269f5ca8b8aa06 75cacdd655892e4f37d19930227122ee9232273b9a2142a2d978b0b4aa0b6224]
	I1212 00:27:45.987485 1136061 ssh_runner.go:195] Run: which crictl
	I1212 00:27:45.991901 1136061 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 3eebb53719b044ff0f8bfa01642c6aa7d668c9954c4fcd8d3ee434c139de9981 9cd722191fc338aac7a89c2d00db5070b6a648ff832576463b971e453ab713e1 2c54ff2857a72c4b4ea328dd0091cb770a0fab5c5afa5e9aefed96355b2b5ad4 6b9b0b97d1c05851d668286a34fcf837e4ae49285783665afaee0ae278d58028 75a71101cb29c1bc16dd578763bae25ef2bcd63c096a709c4c4eb0a18733ca21 6ba90d5c93c882b7c101e94fc79912cb65497daa618adc31a3ff5b5d57779dc6 d2fe0dff992ba2c20f40895b00210ceeaca8893cbb7bbfdcc7269f5ca8b8aa06 75cacdd655892e4f37d19930227122ee9232273b9a2142a2d978b0b4aa0b6224
	I1212 00:27:46.057560 1136061 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 00:27:46.156370 1136061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:27:46.167126 1136061 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Dec 12 00:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 12 00:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Dec 12 00:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec 12 00:25 /etc/kubernetes/scheduler.conf
	
	I1212 00:27:46.167196 1136061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:27:46.177657 1136061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:27:46.188209 1136061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:27:46.198545 1136061 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:27:46.198600 1136061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:27:46.208494 1136061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:27:46.218874 1136061 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:27:46.218935 1136061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:27:46.228859 1136061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:27:46.239299 1136061 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 00:27:46.239313 1136061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:27:46.299761 1136061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:27:48.345015 1136061 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.045229648s)
	I1212 00:27:48.345044 1136061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:27:48.544576 1136061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:27:48.627860 1136061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:27:48.705189 1136061 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:27:48.705296 1136061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:27:48.721210 1136061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:27:49.247873 1136061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:27:49.747255 1136061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:27:49.774251 1136061 api_server.go:72] duration metric: took 1.069069581s to wait for apiserver process to appear ...
	I1212 00:27:49.774266 1136061 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:27:49.774287 1136061 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1212 00:27:53.421916 1136061 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 00:27:53.421934 1136061 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 00:27:53.421944 1136061 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1212 00:27:53.556533 1136061 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 00:27:53.556551 1136061 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 00:27:54.056907 1136061 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1212 00:27:54.112408 1136061 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 00:27:54.112426 1136061 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 00:27:54.556956 1136061 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1212 00:27:54.566252 1136061 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1212 00:27:54.581827 1136061 api_server.go:141] control plane version: v1.28.4
	I1212 00:27:54.581844 1136061 api_server.go:131] duration metric: took 4.807572793s to wait for apiserver health ...
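The 403 -> 500 -> 200 sequence above is the expected progression while the restarted apiserver comes up: the anonymous probe is first rejected outright, then /healthz reports the rbac/bootstrap-roles and scheduling post-start hooks as still pending, and finally it returns ok. A hedged way to run the same probe by hand against this profile (address and port 8441 taken from the log) is:

  # the verbose form prints the same per-hook [+]/[-] breakdown seen above
  out/minikube-linux-arm64 -p functional-885247 ssh -- curl -sk 'https://192.168.49.2:8441/healthz?verbose'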
	I1212 00:27:54.581852 1136061 cni.go:84] Creating CNI manager for ""
	I1212 00:27:54.581857 1136061 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:27:54.584608 1136061 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 00:27:54.586740 1136061 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:27:54.592006 1136061 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 00:27:54.592017 1136061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 00:27:54.614539 1136061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
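With the docker driver and the crio runtime, minikube recommends kindnet for pod networking, and the manifest applied above is that CNI configuration. A quick, hedged way to confirm the CNI pieces after a restart like this one:

  kubectl --context functional-885247 -n kube-system get pods -o wide
  # /etc/cni/net.d is assumed to be the CNI config directory CRI-O uses inside the node
  out/minikube-linux-arm64 -p functional-885247 ssh -- ls /etc/cni/net.d

The kindnet pod (kindnet-q94g5 in this run) should be Running, and the node should carry the 10.244.0.0/24 pod CIDR that the later describe output shows.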
	I1212 00:27:55.448687 1136061 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:27:55.457331 1136061 system_pods.go:59] 8 kube-system pods found
	I1212 00:27:55.457352 1136061 system_pods.go:61] "coredns-5dd5756b68-hfstc" [1887835a-db28-446b-9a35-801282264ada] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:27:55.457359 1136061 system_pods.go:61] "etcd-functional-885247" [6e8cb691-81e6-4c0a-8114-45392b4c1333] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:27:55.457364 1136061 system_pods.go:61] "kindnet-q94g5" [1dec636d-10c8-4b12-a0db-e75e06404b73] Running
	I1212 00:27:55.457371 1136061 system_pods.go:61] "kube-apiserver-functional-885247" [7d12e440-bf20-46ce-94a9-dddfeb327639] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:27:55.457378 1136061 system_pods.go:61] "kube-controller-manager-functional-885247" [c8b73cf4-857b-405b-98dd-3072d885c308] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:27:55.457383 1136061 system_pods.go:61] "kube-proxy-ls4xf" [b3a9b70b-4d04-4fc1-8cb8-24594a551772] Running
	I1212 00:27:55.457390 1136061 system_pods.go:61] "kube-scheduler-functional-885247" [c58eb37b-854a-401e-a478-4003e79d06a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:27:55.457396 1136061 system_pods.go:61] "storage-provisioner" [ee03670d-7b8a-47cc-91b0-4f1e23b5629c] Running
	I1212 00:27:55.457401 1136061 system_pods.go:74] duration metric: took 8.703406ms to wait for pod list to return data ...
	I1212 00:27:55.457407 1136061 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:27:55.460994 1136061 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 00:27:55.461016 1136061 node_conditions.go:123] node cpu capacity is 2
	I1212 00:27:55.461025 1136061 node_conditions.go:105] duration metric: took 3.614596ms to run NodePressure ...
	I1212 00:27:55.461044 1136061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:27:55.725521 1136061 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 00:27:55.736134 1136061 kubeadm.go:787] kubelet initialised
	I1212 00:27:55.736144 1136061 kubeadm.go:788] duration metric: took 10.609877ms waiting for restarted kubelet to initialise ...
	I1212 00:27:55.736151 1136061 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:27:55.744342 1136061 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hfstc" in "kube-system" namespace to be "Ready" ...
	I1212 00:27:57.762437 1136061 pod_ready.go:102] pod "coredns-5dd5756b68-hfstc" in "kube-system" namespace has status "Ready":"False"
	I1212 00:27:59.762557 1136061 pod_ready.go:102] pod "coredns-5dd5756b68-hfstc" in "kube-system" namespace has status "Ready":"False"
	I1212 00:28:00.762542 1136061 pod_ready.go:92] pod "coredns-5dd5756b68-hfstc" in "kube-system" namespace has status "Ready":"True"
	I1212 00:28:00.762553 1136061 pod_ready.go:81] duration metric: took 5.018197111s waiting for pod "coredns-5dd5756b68-hfstc" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:00.762564 1136061 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-885247" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:01.280222 1136061 pod_ready.go:92] pod "etcd-functional-885247" in "kube-system" namespace has status "Ready":"True"
	I1212 00:28:01.280234 1136061 pod_ready.go:81] duration metric: took 517.663832ms waiting for pod "etcd-functional-885247" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:01.280246 1136061 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-885247" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:01.285463 1136061 pod_ready.go:92] pod "kube-apiserver-functional-885247" in "kube-system" namespace has status "Ready":"True"
	I1212 00:28:01.285478 1136061 pod_ready.go:81] duration metric: took 5.221228ms waiting for pod "kube-apiserver-functional-885247" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:01.285487 1136061 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-885247" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:01.291006 1136061 pod_ready.go:92] pod "kube-controller-manager-functional-885247" in "kube-system" namespace has status "Ready":"True"
	I1212 00:28:01.291016 1136061 pod_ready.go:81] duration metric: took 5.523357ms waiting for pod "kube-controller-manager-functional-885247" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:01.291027 1136061 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ls4xf" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:01.560798 1136061 pod_ready.go:92] pod "kube-proxy-ls4xf" in "kube-system" namespace has status "Ready":"True"
	I1212 00:28:01.560809 1136061 pod_ready.go:81] duration metric: took 269.776905ms waiting for pod "kube-proxy-ls4xf" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:01.560819 1136061 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-885247" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:03.866313 1136061 pod_ready.go:102] pod "kube-scheduler-functional-885247" in "kube-system" namespace has status "Ready":"False"
	I1212 00:28:04.367403 1136061 pod_ready.go:92] pod "kube-scheduler-functional-885247" in "kube-system" namespace has status "Ready":"True"
	I1212 00:28:04.367415 1136061 pod_ready.go:81] duration metric: took 2.806589329s waiting for pod "kube-scheduler-functional-885247" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:04.367425 1136061 pod_ready.go:38] duration metric: took 8.6312659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:28:04.367438 1136061 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:28:04.377535 1136061 ops.go:34] apiserver oom_adj: -16
	I1212 00:28:04.377559 1136061 kubeadm.go:640] restartCluster took 18.453787766s
	I1212 00:28:04.377566 1136061 kubeadm.go:406] StartCluster complete in 18.536735277s
	I1212 00:28:04.377580 1136061 settings.go:142] acquiring lock: {Name:mk4639df610f4394c6679c82a1803a108086063e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:28:04.377664 1136061 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:28:04.378451 1136061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/kubeconfig: {Name:mk6bda1f8356012618f11e41d531a3f786e443d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:28:04.378676 1136061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:28:04.378948 1136061 config.go:182] Loaded profile config "functional-885247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:28:04.379084 1136061 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 00:28:04.379143 1136061 addons.go:69] Setting storage-provisioner=true in profile "functional-885247"
	I1212 00:28:04.379157 1136061 addons.go:231] Setting addon storage-provisioner=true in "functional-885247"
	W1212 00:28:04.379162 1136061 addons.go:240] addon storage-provisioner should already be in state true
	I1212 00:28:04.379210 1136061 host.go:66] Checking if "functional-885247" exists ...
	I1212 00:28:04.379551 1136061 addons.go:69] Setting default-storageclass=true in profile "functional-885247"
	I1212 00:28:04.379566 1136061 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-885247"
	I1212 00:28:04.379662 1136061 cli_runner.go:164] Run: docker container inspect functional-885247 --format={{.State.Status}}
	I1212 00:28:04.379833 1136061 cli_runner.go:164] Run: docker container inspect functional-885247 --format={{.State.Status}}
	I1212 00:28:04.387501 1136061 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-885247" context rescaled to 1 replicas
	I1212 00:28:04.387535 1136061 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:28:04.395514 1136061 out.go:177] * Verifying Kubernetes components...
	I1212 00:28:04.397572 1136061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:28:04.428342 1136061 addons.go:231] Setting addon default-storageclass=true in "functional-885247"
	W1212 00:28:04.428353 1136061 addons.go:240] addon default-storageclass should already be in state true
	I1212 00:28:04.428376 1136061 host.go:66] Checking if "functional-885247" exists ...
	I1212 00:28:04.428777 1136061 cli_runner.go:164] Run: docker container inspect functional-885247 --format={{.State.Status}}
	I1212 00:28:04.430903 1136061 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:28:04.437898 1136061 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:28:04.437915 1136061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:28:04.438003 1136061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885247
	I1212 00:28:04.452443 1136061 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:28:04.452455 1136061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:28:04.452520 1136061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885247
	I1212 00:28:04.485972 1136061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34020 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/functional-885247/id_rsa Username:docker}
	I1212 00:28:04.497391 1136061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34020 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/functional-885247/id_rsa Username:docker}
	I1212 00:28:04.552126 1136061 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 00:28:04.552152 1136061 node_ready.go:35] waiting up to 6m0s for node "functional-885247" to be "Ready" ...
	I1212 00:28:04.555818 1136061 node_ready.go:49] node "functional-885247" has status "Ready":"True"
	I1212 00:28:04.555828 1136061 node_ready.go:38] duration metric: took 3.660536ms waiting for node "functional-885247" to be "Ready" ...
	I1212 00:28:04.555836 1136061 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:28:04.571286 1136061 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hfstc" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:04.635917 1136061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:28:04.654033 1136061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:28:04.765820 1136061 pod_ready.go:92] pod "coredns-5dd5756b68-hfstc" in "kube-system" namespace has status "Ready":"True"
	I1212 00:28:04.765832 1136061 pod_ready.go:81] duration metric: took 194.525691ms waiting for pod "coredns-5dd5756b68-hfstc" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:04.765842 1136061 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-885247" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:05.147689 1136061 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 00:28:05.149575 1136061 addons.go:502] enable addons completed in 770.508495ms: enabled=[storage-provisioner default-storageclass]
	I1212 00:28:05.159570 1136061 pod_ready.go:92] pod "etcd-functional-885247" in "kube-system" namespace has status "Ready":"True"
	I1212 00:28:05.159582 1136061 pod_ready.go:81] duration metric: took 393.733082ms waiting for pod "etcd-functional-885247" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:05.159595 1136061 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-885247" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:05.559825 1136061 pod_ready.go:92] pod "kube-apiserver-functional-885247" in "kube-system" namespace has status "Ready":"True"
	I1212 00:28:05.559837 1136061 pod_ready.go:81] duration metric: took 400.235585ms waiting for pod "kube-apiserver-functional-885247" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:05.559847 1136061 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-885247" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:05.959697 1136061 pod_ready.go:92] pod "kube-controller-manager-functional-885247" in "kube-system" namespace has status "Ready":"True"
	I1212 00:28:05.959717 1136061 pod_ready.go:81] duration metric: took 399.854336ms waiting for pod "kube-controller-manager-functional-885247" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:05.959728 1136061 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ls4xf" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:06.360358 1136061 pod_ready.go:92] pod "kube-proxy-ls4xf" in "kube-system" namespace has status "Ready":"True"
	I1212 00:28:06.360369 1136061 pod_ready.go:81] duration metric: took 400.63522ms waiting for pod "kube-proxy-ls4xf" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:06.360378 1136061 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-885247" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:06.760022 1136061 pod_ready.go:92] pod "kube-scheduler-functional-885247" in "kube-system" namespace has status "Ready":"True"
	I1212 00:28:06.760033 1136061 pod_ready.go:81] duration metric: took 399.649042ms waiting for pod "kube-scheduler-functional-885247" in "kube-system" namespace to be "Ready" ...
	I1212 00:28:06.760044 1136061 pod_ready.go:38] duration metric: took 2.204200414s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:28:06.760057 1136061 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:28:06.760130 1136061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:28:06.773353 1136061 api_server.go:72] duration metric: took 2.385788535s to wait for apiserver process to appear ...
	I1212 00:28:06.773368 1136061 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:28:06.773384 1136061 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1212 00:28:06.783252 1136061 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1212 00:28:06.784550 1136061 api_server.go:141] control plane version: v1.28.4
	I1212 00:28:06.784562 1136061 api_server.go:131] duration metric: took 11.189437ms to wait for apiserver health ...
	I1212 00:28:06.784569 1136061 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:28:06.963892 1136061 system_pods.go:59] 8 kube-system pods found
	I1212 00:28:06.963907 1136061 system_pods.go:61] "coredns-5dd5756b68-hfstc" [1887835a-db28-446b-9a35-801282264ada] Running
	I1212 00:28:06.963912 1136061 system_pods.go:61] "etcd-functional-885247" [6e8cb691-81e6-4c0a-8114-45392b4c1333] Running
	I1212 00:28:06.963917 1136061 system_pods.go:61] "kindnet-q94g5" [1dec636d-10c8-4b12-a0db-e75e06404b73] Running
	I1212 00:28:06.963921 1136061 system_pods.go:61] "kube-apiserver-functional-885247" [7d12e440-bf20-46ce-94a9-dddfeb327639] Running
	I1212 00:28:06.963926 1136061 system_pods.go:61] "kube-controller-manager-functional-885247" [c8b73cf4-857b-405b-98dd-3072d885c308] Running
	I1212 00:28:06.963930 1136061 system_pods.go:61] "kube-proxy-ls4xf" [b3a9b70b-4d04-4fc1-8cb8-24594a551772] Running
	I1212 00:28:06.963934 1136061 system_pods.go:61] "kube-scheduler-functional-885247" [c58eb37b-854a-401e-a478-4003e79d06a3] Running
	I1212 00:28:06.963938 1136061 system_pods.go:61] "storage-provisioner" [ee03670d-7b8a-47cc-91b0-4f1e23b5629c] Running
	I1212 00:28:06.963943 1136061 system_pods.go:74] duration metric: took 179.369019ms to wait for pod list to return data ...
	I1212 00:28:06.963949 1136061 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:28:07.160207 1136061 default_sa.go:45] found service account: "default"
	I1212 00:28:07.160220 1136061 default_sa.go:55] duration metric: took 196.26566ms for default service account to be created ...
	I1212 00:28:07.160228 1136061 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:28:07.362832 1136061 system_pods.go:86] 8 kube-system pods found
	I1212 00:28:07.362846 1136061 system_pods.go:89] "coredns-5dd5756b68-hfstc" [1887835a-db28-446b-9a35-801282264ada] Running
	I1212 00:28:07.362852 1136061 system_pods.go:89] "etcd-functional-885247" [6e8cb691-81e6-4c0a-8114-45392b4c1333] Running
	I1212 00:28:07.362856 1136061 system_pods.go:89] "kindnet-q94g5" [1dec636d-10c8-4b12-a0db-e75e06404b73] Running
	I1212 00:28:07.362861 1136061 system_pods.go:89] "kube-apiserver-functional-885247" [7d12e440-bf20-46ce-94a9-dddfeb327639] Running
	I1212 00:28:07.362866 1136061 system_pods.go:89] "kube-controller-manager-functional-885247" [c8b73cf4-857b-405b-98dd-3072d885c308] Running
	I1212 00:28:07.362870 1136061 system_pods.go:89] "kube-proxy-ls4xf" [b3a9b70b-4d04-4fc1-8cb8-24594a551772] Running
	I1212 00:28:07.362874 1136061 system_pods.go:89] "kube-scheduler-functional-885247" [c58eb37b-854a-401e-a478-4003e79d06a3] Running
	I1212 00:28:07.362878 1136061 system_pods.go:89] "storage-provisioner" [ee03670d-7b8a-47cc-91b0-4f1e23b5629c] Running
	I1212 00:28:07.362884 1136061 system_pods.go:126] duration metric: took 202.651695ms to wait for k8s-apps to be running ...
	I1212 00:28:07.362890 1136061 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:28:07.362947 1136061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:28:07.376431 1136061 system_svc.go:56] duration metric: took 13.527755ms WaitForService to wait for kubelet.
	I1212 00:28:07.376447 1136061 kubeadm.go:581] duration metric: took 2.988889971s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 00:28:07.376464 1136061 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:28:07.560054 1136061 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 00:28:07.560075 1136061 node_conditions.go:123] node cpu capacity is 2
	I1212 00:28:07.560084 1136061 node_conditions.go:105] duration metric: took 183.615934ms to run NodePressure ...
	I1212 00:28:07.560095 1136061 start.go:228] waiting for startup goroutines ...
	I1212 00:28:07.560101 1136061 start.go:233] waiting for cluster config update ...
	I1212 00:28:07.560114 1136061 start.go:242] writing updated cluster config ...
	I1212 00:28:07.560466 1136061 ssh_runner.go:195] Run: rm -f paused
	I1212 00:28:07.625994 1136061 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 00:28:07.628392 1136061 out.go:177] * Done! kubectl is now configured to use "functional-885247" cluster and "default" namespace by default
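At this point the restart is complete and kubectl points at the profile. If later test steps misbehave, the usual follow-ups (not part of this log) are to confirm which addons were re-enabled and what actually runs in the cluster:

  out/minikube-linux-arm64 -p functional-885247 addons list
  kubectl --context functional-885247 get pods -A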
	
	* 
	* ==> CRI-O <==
	* Dec 12 00:28:51 functional-885247 crio[3896]: time="2023-12-12 00:28:51.300058266Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Dec 12 00:28:51 functional-885247 crio[3896]: time="2023-12-12 00:28:51.920305412Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=06dcc990-6284-4a29-92ea-2901bdc5a874 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:28:51 functional-885247 crio[3896]: time="2023-12-12 00:28:51.920525663Z" level=info msg="Image docker.io/nginx:alpine not found" id=06dcc990-6284-4a29-92ea-2901bdc5a874 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:29:05 functional-885247 crio[3896]: time="2023-12-12 00:29:05.724552433Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b3257466-5859-4088-b01e-43f0e8b9b74e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:29:05 functional-885247 crio[3896]: time="2023-12-12 00:29:05.724779314Z" level=info msg="Image docker.io/nginx:alpine not found" id=b3257466-5859-4088-b01e-43f0e8b9b74e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:29:21 functional-885247 crio[3896]: time="2023-12-12 00:29:21.571563782Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=cd5333e5-53d9-4280-8487-221c2f274f9d name=/runtime.v1.ImageService/PullImage
	Dec 12 00:29:21 functional-885247 crio[3896]: time="2023-12-12 00:29:21.573882786Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Dec 12 00:29:21 functional-885247 crio[3896]: time="2023-12-12 00:29:21.974194834Z" level=info msg="Checking image status: docker.io/nginx:latest" id=55dfdade-4379-44b0-92c6-c391a9b9c086 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:29:21 functional-885247 crio[3896]: time="2023-12-12 00:29:21.974416858Z" level=info msg="Image docker.io/nginx:latest not found" id=55dfdade-4379-44b0-92c6-c391a9b9c086 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:29:34 functional-885247 crio[3896]: time="2023-12-12 00:29:34.724894322Z" level=info msg="Checking image status: docker.io/nginx:latest" id=c509da59-833a-4425-b149-bd8feebc6b23 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:29:34 functional-885247 crio[3896]: time="2023-12-12 00:29:34.725183421Z" level=info msg="Image docker.io/nginx:latest not found" id=c509da59-833a-4425-b149-bd8feebc6b23 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:29:51 functional-885247 crio[3896]: time="2023-12-12 00:29:51.842466400Z" level=info msg="Pulling image: docker.io/nginx:latest" id=6c75d526-83f7-4da0-bb7d-59cf8d2c66af name=/runtime.v1.ImageService/PullImage
	Dec 12 00:29:51 functional-885247 crio[3896]: time="2023-12-12 00:29:51.844730694Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Dec 12 00:30:07 functional-885247 crio[3896]: time="2023-12-12 00:30:07.725218687Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=ff46c6f8-5511-49c4-bfdf-9ffbe7e5103c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:30:07 functional-885247 crio[3896]: time="2023-12-12 00:30:07.725472924Z" level=info msg="Image docker.io/nginx:alpine not found" id=ff46c6f8-5511-49c4-bfdf-9ffbe7e5103c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:30:22 functional-885247 crio[3896]: time="2023-12-12 00:30:22.724731659Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=35ab27b9-6a82-412e-b24a-30a776a39434 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:30:22 functional-885247 crio[3896]: time="2023-12-12 00:30:22.724951041Z" level=info msg="Image docker.io/nginx:alpine not found" id=35ab27b9-6a82-412e-b24a-30a776a39434 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:30:52 functional-885247 crio[3896]: time="2023-12-12 00:30:52.344574147Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=21ca30ab-978b-40ac-becb-94bef37c34ef name=/runtime.v1.ImageService/PullImage
	Dec 12 00:30:52 functional-885247 crio[3896]: time="2023-12-12 00:30:52.346613143Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Dec 12 00:31:07 functional-885247 crio[3896]: time="2023-12-12 00:31:07.724498560Z" level=info msg="Checking image status: docker.io/nginx:latest" id=5f90bc0a-c5dd-4184-b5f7-c685ba5f940c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:31:07 functional-885247 crio[3896]: time="2023-12-12 00:31:07.724722791Z" level=info msg="Image docker.io/nginx:latest not found" id=5f90bc0a-c5dd-4184-b5f7-c685ba5f940c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:31:22 functional-885247 crio[3896]: time="2023-12-12 00:31:22.724201106Z" level=info msg="Checking image status: docker.io/nginx:latest" id=27e74b58-a6ec-42c0-acd9-c886319fa50c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:31:22 functional-885247 crio[3896]: time="2023-12-12 00:31:22.724420373Z" level=info msg="Image docker.io/nginx:latest not found" id=27e74b58-a6ec-42c0-acd9-c886319fa50c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:31:22 functional-885247 crio[3896]: time="2023-12-12 00:31:22.725567014Z" level=info msg="Pulling image: docker.io/nginx:latest" id=b5a25603-85ff-42b7-96b3-f625e029ec23 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:31:22 functional-885247 crio[3896]: time="2023-12-12 00:31:22.727633693Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
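The CRI-O log above shows the failure mode relevant to the nginx-based steps in this run: docker.io/nginx:alpine and docker.io/nginx:latest are still being pulled (repeated "Trying to access" / "not found" pairs), so the nginx-svc and sp-pod pods listed later cannot start their containers yet. A hedged way to watch the pulls directly on the node is:

  out/minikube-linux-arm64 -p functional-885247 ssh -- sudo crictl images
  # whether this succeeds depends on Docker Hub availability/throttling, which the log does not show
  out/minikube-linux-arm64 -p functional-885247 ssh -- sudo crictl pull docker.io/library/nginx:alpine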
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9bcf0f2c1713b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   3 minutes ago       Running             storage-provisioner       2                   09c279953d33e       storage-provisioner
	cc89be248e171       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   3 minutes ago       Running             kindnet-cni               2                   dee35c14ebcd1       kindnet-q94g5
	85daf90e1a14d       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   3 minutes ago       Running             kube-proxy                2                   69a71f2200248       kube-proxy-ls4xf
	d305c5b48952f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   3 minutes ago       Running             coredns                   2                   c50523ec72897       coredns-5dd5756b68-hfstc
	f5999a8c9aa58       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   3 minutes ago       Running             etcd                      2                   530ead106ebfb       etcd-functional-885247
	8b89f33801286       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   3 minutes ago       Running             kube-scheduler            2                   7126037a44d56       kube-scheduler-functional-885247
	c87978b379d21       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   3 minutes ago       Running             kube-apiserver            0                   eb4da73bd8e1c       kube-apiserver-functional-885247
	3110bad9a246a       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   3 minutes ago       Running             kube-controller-manager   2                   1a6ccf55d49b4       kube-controller-manager-functional-885247
	3eebb53719b04       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   4 minutes ago       Exited              coredns                   1                   c50523ec72897       coredns-5dd5756b68-hfstc
	9cd722191fc33       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   4 minutes ago       Exited              kube-proxy                1                   69a71f2200248       kube-proxy-ls4xf
	2c54ff2857a72       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   4 minutes ago       Exited              kindnet-cni               1                   dee35c14ebcd1       kindnet-q94g5
	6b9b0b97d1c05       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   4 minutes ago       Exited              kube-scheduler            1                   7126037a44d56       kube-scheduler-functional-885247
	6ba90d5c93c88       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   4 minutes ago       Exited              kube-controller-manager   1                   1a6ccf55d49b4       kube-controller-manager-functional-885247
	d2fe0dff992ba       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Exited              storage-provisioner       1                   09c279953d33e       storage-provisioner
	75cacdd655892       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   4 minutes ago       Exited              etcd                      1                   530ead106ebfb       etcd-functional-885247
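The container status table reflects the two restarts this profile has been through: the Exited rows at ATTEMPT 1 are the containers stopped during the reconfigure at 00:27:45, the Running rows at ATTEMPT 2 are their replacements, and kube-apiserver starts again at ATTEMPT 0, presumably because it was recreated rather than restarted in place. The table can be reproduced on the node with the same tooling the log itself uses:

  out/minikube-linux-arm64 -p functional-885247 ssh -- \
    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system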
	
	* 
	* ==> coredns [3eebb53719b044ff0f8bfa01642c6aa7d668c9954c4fcd8d3ee434c139de9981] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58425 - 20143 "HINFO IN 7855914755466958202.4809755501867559821. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013881542s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [d305c5b48952f1b45c93db24c140f7f72a87bb5261db4df8da80d696ac135fe4] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40471 - 40254 "HINFO IN 1060733692326733094.7955735256448836773. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.038242356s
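Both coredns logs belong to the same pod: the first container (3eebb537...) waited on the Kubernetes API during the apiserver restart and then received SIGTERM, while its replacement (d305c5b4...) came up cleanly with the same configuration SHA512, so no Corefile change was involved. If DNS behaviour needed checking beyond these logs, a hedged pair of follow-ups would be:

  kubectl --context functional-885247 -n kube-system get configmap coredns -o yaml
  # k8s-app=kube-dns is the conventional CoreDNS label in kubeadm clusters; assumed, not shown in this log
  kubectl --context functional-885247 -n kube-system logs -l k8s-app=kube-dns --tail=20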
	
	* 
	* ==> describe nodes <==
	* Name:               functional-885247
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-885247
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4
	                    minikube.k8s.io/name=functional-885247
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T00_26_07_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 00:26:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-885247
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 00:31:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 00:27:53 +0000   Tue, 12 Dec 2023 00:26:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 00:27:53 +0000   Tue, 12 Dec 2023 00:26:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 00:27:53 +0000   Tue, 12 Dec 2023 00:26:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 00:27:53 +0000   Tue, 12 Dec 2023 00:26:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-885247
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 38ff079624a94e72a7f691790526677c
	  System UUID:                a1a5a11c-a0cc-4e00-9639-c2a884b27985
	  Boot ID:                    1e71add7-2409-4eb4-97fc-c7110220f3c5
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-5dd5756b68-hfstc                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m6s
	  kube-system                 etcd-functional-885247                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m19s
	  kube-system                 kindnet-q94g5                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m7s
	  kube-system                 kube-apiserver-functional-885247             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kube-controller-manager-functional-885247    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-proxy-ls4xf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-scheduler-functional-885247             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m4s                   kube-proxy       
	  Normal   Starting                 3m30s                  kube-proxy       
	  Normal   Starting                 4m11s                  kube-proxy       
	  Normal   Starting                 5m27s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m27s (x8 over 5m27s)  kubelet          Node functional-885247 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m27s (x8 over 5m27s)  kubelet          Node functional-885247 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m27s (x8 over 5m27s)  kubelet          Node functional-885247 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     5m19s                  kubelet          Node functional-885247 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  5m19s                  kubelet          Node functional-885247 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m19s                  kubelet          Node functional-885247 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m19s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m7s                   node-controller  Node functional-885247 event: Registered Node functional-885247 in Controller
	  Normal   NodeReady                4m35s                  kubelet          Node functional-885247 status is now: NodeReady
	  Warning  ContainerGCFailed        4m19s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m                     node-controller  Node functional-885247 event: Registered Node functional-885247 in Controller
	  Normal   Starting                 3m37s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m36s (x8 over 3m37s)  kubelet          Node functional-885247 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m36s (x8 over 3m37s)  kubelet          Node functional-885247 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m36s (x8 over 3m37s)  kubelet          Node functional-885247 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m20s                  node-controller  Node functional-885247 event: Registered Node functional-885247 in Controller
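The node description is consistent with a healthy control plane after the restarts: Ready since 00:26:50, no taints, and the only warning is a transient ContainerGCFailed from the window when crio.sock was down. The same view can be regenerated at any time with:

  kubectl --context functional-885247 describe node functional-885247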
	
	* 
	* ==> dmesg <==
	* [  +0.001096] FS-Cache: O-key=[8] '51613b0000000000'
	[  +0.000797] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001058] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=000000009ed47378
	[  +0.001097] FS-Cache: N-key=[8] '51613b0000000000'
	[  +0.004696] FS-Cache: Duplicate cookie detected
	[  +0.000742] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001026] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=000000006ac44817
	[  +0.001133] FS-Cache: O-key=[8] '51613b0000000000'
	[  +0.000752] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001015] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=00000000b962c00a
	[  +0.001103] FS-Cache: N-key=[8] '51613b0000000000'
	[  +3.096598] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000996] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=000000002fc1e9d2
	[  +0.001145] FS-Cache: O-key=[8] '50613b0000000000'
	[  +0.000744] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000970] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=000000009ed47378
	[  +0.001095] FS-Cache: N-key=[8] '50613b0000000000'
	[  +0.330575] FS-Cache: Duplicate cookie detected
	[  +0.000746] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=00000000caee5792
	[  +0.001154] FS-Cache: O-key=[8] '56613b0000000000'
	[  +0.000744] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000977] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=0000000001854e73
	[  +0.001084] FS-Cache: N-key=[8] '56613b0000000000'
	
	* 
	* ==> etcd [75cacdd655892e4f37d19930227122ee9232273b9a2142a2d978b0b4aa0b6224] <==
	* {"level":"info","ts":"2023-12-12T00:27:09.101709Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:27:10.245284Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-12T00:27:10.245396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-12T00:27:10.245459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-12T00:27:10.245504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2023-12-12T00:27:10.245539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-12-12T00:27:10.245585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2023-12-12T00:27:10.245619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-12-12T00:27:10.246571Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-885247 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T00:27:10.246648Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:27:10.247752Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-12T00:27:10.246683Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:27:10.254244Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T00:27:10.25482Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T00:27:10.254884Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T00:27:37.834715Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-12-12T00:27:37.834774Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-885247","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2023-12-12T00:27:37.834836Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T00:27:37.834916Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T00:27:37.865354Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T00:27:37.865485Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-12-12T00:27:37.865555Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2023-12-12T00:27:37.868388Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-12T00:27:37.868534Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-12T00:27:37.868588Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-885247","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [f5999a8c9aa58402eee6c72967efe716973b6af891a79e31f0d013a2da05baa5] <==
	* {"level":"info","ts":"2023-12-12T00:27:49.732402Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T00:27:49.731068Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-12-12T00:27:49.731182Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T00:27:49.732612Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T00:27:49.732671Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T00:27:49.731442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-12-12T00:27:49.733798Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-12-12T00:27:49.731519Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-12T00:27:49.734189Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-12T00:27:49.734073Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:27:49.734313Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:27:50.909296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2023-12-12T00:27:50.909422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2023-12-12T00:27:50.909469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-12-12T00:27:50.909514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2023-12-12T00:27:50.909546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-12-12T00:27:50.909582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2023-12-12T00:27:50.909627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-12-12T00:27:50.916356Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-885247 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T00:27:50.917305Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:27:50.918399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-12T00:27:50.918778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:27:50.919586Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T00:27:50.92931Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T00:27:50.929397Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:31:25 up  7:13,  0 users,  load average: 0.18, 0.63, 0.60
	Linux functional-885247 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [2c54ff2857a72c4b4ea328dd0091cb770a0fab5c5afa5e9aefed96355b2b5ad4] <==
	* I1212 00:27:08.822982       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 00:27:08.823231       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1212 00:27:08.837641       1 main.go:116] setting mtu 1500 for CNI 
	I1212 00:27:08.837744       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 00:27:08.837783       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 00:27:13.073982       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:27:13.074028       1 main.go:227] handling current node
	I1212 00:27:23.089776       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:27:23.089806       1 main.go:227] handling current node
	I1212 00:27:33.102695       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:27:33.102728       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [cc89be248e17194adc898d55ff8c9d6fb2cf28ab47da21bfde8270c65026c9fa] <==
	* I1212 00:29:24.710121       1 main.go:227] handling current node
	I1212 00:29:34.721381       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:29:34.721405       1 main.go:227] handling current node
	I1212 00:29:44.728532       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:29:44.728562       1 main.go:227] handling current node
	I1212 00:29:54.732149       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:29:54.732178       1 main.go:227] handling current node
	I1212 00:30:04.744231       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:30:04.744264       1 main.go:227] handling current node
	I1212 00:30:14.750376       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:30:14.750408       1 main.go:227] handling current node
	I1212 00:30:24.755100       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:30:24.755128       1 main.go:227] handling current node
	I1212 00:30:34.765880       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:30:34.765912       1 main.go:227] handling current node
	I1212 00:30:44.770232       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:30:44.770262       1 main.go:227] handling current node
	I1212 00:30:54.774472       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:30:54.774503       1 main.go:227] handling current node
	I1212 00:31:04.777922       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:31:04.777951       1 main.go:227] handling current node
	I1212 00:31:14.790344       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:31:14.790372       1 main.go:227] handling current node
	I1212 00:31:24.801559       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:31:24.801855       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [c87978b379d21aed1927974820305541c63029b925fb60115ce73039c224b4ed] <==
	* I1212 00:27:53.415044       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1212 00:27:53.589031       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:27:53.615409       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 00:27:53.615636       1 aggregator.go:166] initial CRD sync complete...
	I1212 00:27:53.615698       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 00:27:53.615744       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 00:27:53.615856       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:27:53.626455       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 00:27:53.626677       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 00:27:53.626698       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 00:27:53.626713       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 00:27:53.628052       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 00:27:53.634231       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 00:27:53.634941       1 shared_informer.go:318] Caches are synced for configmaps
	E1212 00:27:53.645890       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 00:27:54.336079       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 00:27:55.441420       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 00:27:55.591755       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 00:27:55.602883       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 00:27:55.700411       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:27:55.712078       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:28:11.690619       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 00:28:11.830540       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.3.190"}
	I1212 00:28:11.859509       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:28:18.278401       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.21.197"}
	
	* 
	* ==> kube-controller-manager [3110bad9a246a5e3a021d8db41d9bc09e5ddbdd37b29ed73b058f8c22821de1b] <==
	* I1212 00:28:05.897433       1 shared_informer.go:318] Caches are synced for deployment
	I1212 00:28:05.900436       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1212 00:28:05.902709       1 shared_informer.go:318] Caches are synced for ephemeral
	I1212 00:28:05.905584       1 shared_informer.go:318] Caches are synced for persistent volume
	I1212 00:28:05.905671       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1212 00:28:05.910217       1 shared_informer.go:318] Caches are synced for PV protection
	I1212 00:28:05.910971       1 shared_informer.go:318] Caches are synced for expand
	I1212 00:28:05.912714       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1212 00:28:05.919029       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1212 00:28:05.921331       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1212 00:28:05.921434       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1212 00:28:05.923524       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1212 00:28:05.928471       1 shared_informer.go:318] Caches are synced for service account
	I1212 00:28:05.947454       1 shared_informer.go:318] Caches are synced for HPA
	I1212 00:28:05.960111       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1212 00:28:05.978679       1 shared_informer.go:318] Caches are synced for namespace
	I1212 00:28:05.985761       1 shared_informer.go:318] Caches are synced for disruption
	I1212 00:28:06.003931       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 00:28:06.004478       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 00:28:06.089974       1 shared_informer.go:318] Caches are synced for attach detach
	I1212 00:28:06.460579       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 00:28:06.467796       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 00:28:06.467831       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 00:28:22.869754       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1212 00:28:22.869929       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	* 
	* ==> kube-controller-manager [6ba90d5c93c882b7c101e94fc79912cb65497daa618adc31a3ff5b5d57779dc6] <==
	* I1212 00:27:25.507213       1 shared_informer.go:318] Caches are synced for service account
	I1212 00:27:25.508274       1 shared_informer.go:318] Caches are synced for namespace
	I1212 00:27:25.509406       1 shared_informer.go:318] Caches are synced for crt configmap
	I1212 00:27:25.511549       1 shared_informer.go:318] Caches are synced for daemon sets
	I1212 00:27:25.514267       1 shared_informer.go:318] Caches are synced for expand
	I1212 00:27:25.514368       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1212 00:27:25.514650       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.975µs"
	I1212 00:27:25.518590       1 shared_informer.go:318] Caches are synced for TTL
	I1212 00:27:25.519716       1 shared_informer.go:318] Caches are synced for job
	I1212 00:27:25.529707       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1212 00:27:25.540673       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1212 00:27:25.540691       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1212 00:27:25.540708       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1212 00:27:25.540717       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1212 00:27:25.565313       1 shared_informer.go:318] Caches are synced for deployment
	I1212 00:27:25.565418       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1212 00:27:25.601599       1 shared_informer.go:318] Caches are synced for disruption
	I1212 00:27:25.614451       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1212 00:27:25.615489       1 shared_informer.go:318] Caches are synced for stateful set
	I1212 00:27:25.634495       1 shared_informer.go:318] Caches are synced for cronjob
	I1212 00:27:25.644514       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 00:27:25.676842       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 00:27:26.028692       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 00:27:26.028795       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 00:27:26.079860       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [85daf90e1a14dc27bed9f6a3bf43e018b39fc74edf0ede7ed84ad4e53a9f75d9] <==
	* I1212 00:27:54.302415       1 server_others.go:69] "Using iptables proxy"
	I1212 00:27:54.321796       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1212 00:27:54.362073       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:27:54.366407       1 server_others.go:152] "Using iptables Proxier"
	I1212 00:27:54.366450       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 00:27:54.366459       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 00:27:54.366507       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 00:27:54.366726       1 server.go:846] "Version info" version="v1.28.4"
	I1212 00:27:54.366742       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:27:54.381778       1 config.go:188] "Starting service config controller"
	I1212 00:27:54.381902       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 00:27:54.381988       1 config.go:97] "Starting endpoint slice config controller"
	I1212 00:27:54.382021       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 00:27:54.382572       1 config.go:315] "Starting node config controller"
	I1212 00:27:54.382628       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:27:54.482971       1 shared_informer.go:318] Caches are synced for node config
	I1212 00:27:54.483020       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:27:54.483050       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [9cd722191fc338aac7a89c2d00db5070b6a648ff832576463b971e453ab713e1] <==
	* I1212 00:27:11.910502       1 server_others.go:69] "Using iptables proxy"
	I1212 00:27:13.105621       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1212 00:27:13.443475       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:27:13.456351       1 server_others.go:152] "Using iptables Proxier"
	I1212 00:27:13.456458       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 00:27:13.456493       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 00:27:13.456648       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 00:27:13.458943       1 server.go:846] "Version info" version="v1.28.4"
	I1212 00:27:13.459182       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:27:13.463086       1 config.go:188] "Starting service config controller"
	I1212 00:27:13.464163       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 00:27:13.466362       1 config.go:97] "Starting endpoint slice config controller"
	I1212 00:27:13.466442       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 00:27:13.470984       1 config.go:315] "Starting node config controller"
	I1212 00:27:13.471984       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:27:13.567360       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 00:27:13.567465       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:27:13.575910       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [6b9b0b97d1c05851d668286a34fcf837e4ae49285783665afaee0ae278d58028] <==
	* I1212 00:27:12.009226       1 serving.go:348] Generated self-signed cert in-memory
	I1212 00:27:13.555407       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 00:27:13.555507       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:27:13.563869       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 00:27:13.564089       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1212 00:27:13.564737       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1212 00:27:13.564118       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:27:13.565615       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:27:13.564133       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1212 00:27:13.565970       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1212 00:27:13.564146       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 00:27:13.665556       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1212 00:27:13.665719       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:27:13.666854       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1212 00:27:37.826573       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1212 00:27:37.827099       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [8b89f33801286118a5eb14449c35aafd6842456e1ecaf1f0dfa8bf7dc36f3b12] <==
	* I1212 00:27:50.958687       1 serving.go:348] Generated self-signed cert in-memory
	W1212 00:27:53.550011       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 00:27:53.550055       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 00:27:53.550065       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 00:27:53.550071       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 00:27:53.594152       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 00:27:53.594252       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:27:53.596418       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:27:53.596476       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:27:53.597212       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 00:27:53.597352       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 00:27:53.697211       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 12 00:30:48 functional-885247 kubelet[4165]: E1212 00:30:48.861129    4165 manager.go:1106] Failed to create existing container: /docker/d89df14e9405b60c8487b14bc54f81f03ec29ca7cf0a04eb11dd6ffcaaa54960/crio-15cc41a8d32b2202b0c75b2bcc0bdff525d762e63e012ca6741e8399801077be: Error finding container 15cc41a8d32b2202b0c75b2bcc0bdff525d762e63e012ca6741e8399801077be: Status 404 returned error can't find the container with id 15cc41a8d32b2202b0c75b2bcc0bdff525d762e63e012ca6741e8399801077be
	Dec 12 00:30:48 functional-885247 kubelet[4165]: E1212 00:30:48.861419    4165 manager.go:1106] Failed to create existing container: /docker/d89df14e9405b60c8487b14bc54f81f03ec29ca7cf0a04eb11dd6ffcaaa54960/crio-d198833809db3e60892f32c9446892384d8ed7606bd94ffb2052ed2f45b7aab6: Error finding container d198833809db3e60892f32c9446892384d8ed7606bd94ffb2052ed2f45b7aab6: Status 404 returned error can't find the container with id d198833809db3e60892f32c9446892384d8ed7606bd94ffb2052ed2f45b7aab6
	Dec 12 00:30:48 functional-885247 kubelet[4165]: E1212 00:30:48.861796    4165 manager.go:1106] Failed to create existing container: /crio-7126037a44d56acbb8c0756119a868b44d30b456f9b041548040f31eac424284: Error finding container 7126037a44d56acbb8c0756119a868b44d30b456f9b041548040f31eac424284: Status 404 returned error can't find the container with id 7126037a44d56acbb8c0756119a868b44d30b456f9b041548040f31eac424284
	Dec 12 00:30:48 functional-885247 kubelet[4165]: E1212 00:30:48.862031    4165 manager.go:1106] Failed to create existing container: /docker/d89df14e9405b60c8487b14bc54f81f03ec29ca7cf0a04eb11dd6ffcaaa54960/crio-1a6ccf55d49b44650667c00b79aae9157c3f4c28ffd5b6c581d4cd790168b434: Error finding container 1a6ccf55d49b44650667c00b79aae9157c3f4c28ffd5b6c581d4cd790168b434: Status 404 returned error can't find the container with id 1a6ccf55d49b44650667c00b79aae9157c3f4c28ffd5b6c581d4cd790168b434
	Dec 12 00:30:48 functional-885247 kubelet[4165]: E1212 00:30:48.862307    4165 manager.go:1106] Failed to create existing container: /docker/d89df14e9405b60c8487b14bc54f81f03ec29ca7cf0a04eb11dd6ffcaaa54960/crio-dee35c14ebcd11af131c5c6a21ae6cccf1f25cdf304612b0cdba561f96756499: Error finding container dee35c14ebcd11af131c5c6a21ae6cccf1f25cdf304612b0cdba561f96756499: Status 404 returned error can't find the container with id dee35c14ebcd11af131c5c6a21ae6cccf1f25cdf304612b0cdba561f96756499
	Dec 12 00:30:48 functional-885247 kubelet[4165]: E1212 00:30:48.862512    4165 manager.go:1106] Failed to create existing container: /crio-dee35c14ebcd11af131c5c6a21ae6cccf1f25cdf304612b0cdba561f96756499: Error finding container dee35c14ebcd11af131c5c6a21ae6cccf1f25cdf304612b0cdba561f96756499: Status 404 returned error can't find the container with id dee35c14ebcd11af131c5c6a21ae6cccf1f25cdf304612b0cdba561f96756499
	Dec 12 00:30:48 functional-885247 kubelet[4165]: E1212 00:30:48.862770    4165 manager.go:1106] Failed to create existing container: /crio-d198833809db3e60892f32c9446892384d8ed7606bd94ffb2052ed2f45b7aab6: Error finding container d198833809db3e60892f32c9446892384d8ed7606bd94ffb2052ed2f45b7aab6: Status 404 returned error can't find the container with id d198833809db3e60892f32c9446892384d8ed7606bd94ffb2052ed2f45b7aab6
	Dec 12 00:30:48 functional-885247 kubelet[4165]: E1212 00:30:48.863013    4165 manager.go:1106] Failed to create existing container: /docker/d89df14e9405b60c8487b14bc54f81f03ec29ca7cf0a04eb11dd6ffcaaa54960/crio-09c279953d33e424959fc8b620ddbc8dcf2d5004d32ae3b1c2e45e0de4c50a63: Error finding container 09c279953d33e424959fc8b620ddbc8dcf2d5004d32ae3b1c2e45e0de4c50a63: Status 404 returned error can't find the container with id 09c279953d33e424959fc8b620ddbc8dcf2d5004d32ae3b1c2e45e0de4c50a63
	Dec 12 00:30:48 functional-885247 kubelet[4165]: E1212 00:30:48.863315    4165 manager.go:1106] Failed to create existing container: /docker/d89df14e9405b60c8487b14bc54f81f03ec29ca7cf0a04eb11dd6ffcaaa54960/crio-530ead106ebfb33feb9fe5b290e01aecd93d4460559e1bc41e8fe64f5d1ff342: Error finding container 530ead106ebfb33feb9fe5b290e01aecd93d4460559e1bc41e8fe64f5d1ff342: Status 404 returned error can't find the container with id 530ead106ebfb33feb9fe5b290e01aecd93d4460559e1bc41e8fe64f5d1ff342
	Dec 12 00:30:48 functional-885247 kubelet[4165]: E1212 00:30:48.863564    4165 manager.go:1106] Failed to create existing container: /docker/d89df14e9405b60c8487b14bc54f81f03ec29ca7cf0a04eb11dd6ffcaaa54960/crio-7126037a44d56acbb8c0756119a868b44d30b456f9b041548040f31eac424284: Error finding container 7126037a44d56acbb8c0756119a868b44d30b456f9b041548040f31eac424284: Status 404 returned error can't find the container with id 7126037a44d56acbb8c0756119a868b44d30b456f9b041548040f31eac424284
	Dec 12 00:30:48 functional-885247 kubelet[4165]: E1212 00:30:48.863858    4165 manager.go:1106] Failed to create existing container: /crio-c50523ec72897d015d0c6d1b0c872d632b2d52b31a4345eed65ecb54088290b4: Error finding container c50523ec72897d015d0c6d1b0c872d632b2d52b31a4345eed65ecb54088290b4: Status 404 returned error can't find the container with id c50523ec72897d015d0c6d1b0c872d632b2d52b31a4345eed65ecb54088290b4
	Dec 12 00:30:48 functional-885247 kubelet[4165]: E1212 00:30:48.864093    4165 manager.go:1106] Failed to create existing container: /docker/d89df14e9405b60c8487b14bc54f81f03ec29ca7cf0a04eb11dd6ffcaaa54960/crio-c50523ec72897d015d0c6d1b0c872d632b2d52b31a4345eed65ecb54088290b4: Error finding container c50523ec72897d015d0c6d1b0c872d632b2d52b31a4345eed65ecb54088290b4: Status 404 returned error can't find the container with id c50523ec72897d015d0c6d1b0c872d632b2d52b31a4345eed65ecb54088290b4
	Dec 12 00:30:48 functional-885247 kubelet[4165]: E1212 00:30:48.864307    4165 manager.go:1106] Failed to create existing container: /crio-1a6ccf55d49b44650667c00b79aae9157c3f4c28ffd5b6c581d4cd790168b434: Error finding container 1a6ccf55d49b44650667c00b79aae9157c3f4c28ffd5b6c581d4cd790168b434: Status 404 returned error can't find the container with id 1a6ccf55d49b44650667c00b79aae9157c3f4c28ffd5b6c581d4cd790168b434
	Dec 12 00:30:48 functional-885247 kubelet[4165]: E1212 00:30:48.864512    4165 manager.go:1106] Failed to create existing container: /crio-530ead106ebfb33feb9fe5b290e01aecd93d4460559e1bc41e8fe64f5d1ff342: Error finding container 530ead106ebfb33feb9fe5b290e01aecd93d4460559e1bc41e8fe64f5d1ff342: Status 404 returned error can't find the container with id 530ead106ebfb33feb9fe5b290e01aecd93d4460559e1bc41e8fe64f5d1ff342
	Dec 12 00:30:48 functional-885247 kubelet[4165]: E1212 00:30:48.864749    4165 manager.go:1106] Failed to create existing container: /crio-09c279953d33e424959fc8b620ddbc8dcf2d5004d32ae3b1c2e45e0de4c50a63: Error finding container 09c279953d33e424959fc8b620ddbc8dcf2d5004d32ae3b1c2e45e0de4c50a63: Status 404 returned error can't find the container with id 09c279953d33e424959fc8b620ddbc8dcf2d5004d32ae3b1c2e45e0de4c50a63
	Dec 12 00:30:48 functional-885247 kubelet[4165]: E1212 00:30:48.865000    4165 manager.go:1106] Failed to create existing container: /docker/d89df14e9405b60c8487b14bc54f81f03ec29ca7cf0a04eb11dd6ffcaaa54960/crio-69a71f2200248dc75ccd332c7ca444b5b8b9b2a5d271181f50dc8e2d89658caf: Error finding container 69a71f2200248dc75ccd332c7ca444b5b8b9b2a5d271181f50dc8e2d89658caf: Status 404 returned error can't find the container with id 69a71f2200248dc75ccd332c7ca444b5b8b9b2a5d271181f50dc8e2d89658caf
	Dec 12 00:30:52 functional-885247 kubelet[4165]: E1212 00:30:52.343884    4165 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:736342e81e97220f954b8c33846ba80d2d95f59b30225a5c63d063c8b250b0ab in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 12 00:30:52 functional-885247 kubelet[4165]: E1212 00:30:52.343937    4165 kuberuntime_image.go:53] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:736342e81e97220f954b8c33846ba80d2d95f59b30225a5c63d063c8b250b0ab in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 12 00:30:52 functional-885247 kubelet[4165]: E1212 00:30:52.344151    4165 kuberuntime_manager.go:1261] container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8c9zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(977de9af-f406-
4174-970f-2e5b50d0b31f): ErrImagePull: loading manifest for target platform: reading manifest sha256:736342e81e97220f954b8c33846ba80d2d95f59b30225a5c63d063c8b250b0ab in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 12 00:30:52 functional-885247 kubelet[4165]: E1212 00:30:52.344193    4165 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:736342e81e97220f954b8c33846ba80d2d95f59b30225a5c63d063c8b250b0ab in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="977de9af-f406-4174-970f-2e5b50d0b31f"
	Dec 12 00:31:07 functional-885247 kubelet[4165]: E1212 00:31:07.724958    4165 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="977de9af-f406-4174-970f-2e5b50d0b31f"
	Dec 12 00:31:22 functional-885247 kubelet[4165]: E1212 00:31:22.612708    4165 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Dec 12 00:31:22 functional-885247 kubelet[4165]: E1212 00:31:22.612760    4165 kuberuntime_image.go:53] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Dec 12 00:31:22 functional-885247 kubelet[4165]: E1212 00:31:22.612858    4165 kuberuntime_manager.go:1261] container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-twn6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(0815f656-0ba3-4e2e-9df2-ec1f6d02e4f
8): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 12 00:31:22 functional-885247 kubelet[4165]: E1212 00:31:22.612899    4165 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="0815f656-0ba3-4e2e-9df2-ec1f6d02e4f8"
	
	* 
	* ==> storage-provisioner [9bcf0f2c1713b26424b422db6ceaf994c0e5e6b9459e92d635eee86583df5222] <==
	* I1212 00:27:54.268472       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:27:54.294077       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:27:54.294305       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 00:28:11.695935       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:28:11.696149       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa2012c9-14de-4f07-bcaf-01bdc87917a2", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-885247_80d05831-1f6a-493b-b910-8eaa0845d750 became leader
	I1212 00:28:11.697223       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-885247_80d05831-1f6a-493b-b910-8eaa0845d750!
	I1212 00:28:11.801333       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-885247_80d05831-1f6a-493b-b910-8eaa0845d750!
	I1212 00:28:22.872560       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1212 00:28:22.872627       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    93dbdd88-ced1-4af5-af98-3cbb8d98e2a1 364 0 2023-12-12 00:26:18 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-12-12 00:26:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-504e9958-6083-4000-9aa7-ecd4ade18766 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  504e9958-6083-4000-9aa7-ecd4ade18766 684 0 2023-12-12 00:28:22 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-12-12 00:28:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-12-12 00:28:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1212 00:28:22.876773       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-504e9958-6083-4000-9aa7-ecd4ade18766" provisioned
	I1212 00:28:22.876801       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1212 00:28:22.876808       1 volume_store.go:212] Trying to save persistentvolume "pvc-504e9958-6083-4000-9aa7-ecd4ade18766"
	I1212 00:28:22.890263       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"504e9958-6083-4000-9aa7-ecd4ade18766", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1212 00:28:22.915175       1 volume_store.go:219] persistentvolume "pvc-504e9958-6083-4000-9aa7-ecd4ade18766" saved
	I1212 00:28:22.915757       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"504e9958-6083-4000-9aa7-ecd4ade18766", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-504e9958-6083-4000-9aa7-ecd4ade18766
	
	* 
	* ==> storage-provisioner [d2fe0dff992ba2c20f40895b00210ceeaca8893cbb7bbfdcc7269f5ca8b8aa06] <==
	* I1212 00:27:09.781654       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:27:13.102847       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:27:13.103079       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 00:27:30.538590       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:27:30.538860       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-885247_7ed7eb08-b8c0-435d-ab6e-0e269c393dde!
	I1212 00:27:30.538989       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa2012c9-14de-4f07-bcaf-01bdc87917a2", APIVersion:"v1", ResourceVersion:"541", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-885247_7ed7eb08-b8c0-435d-ab6e-0e269c393dde became leader
	I1212 00:27:30.640833       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-885247_7ed7eb08-b8c0-435d-ab6e-0e269c393dde!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-885247 -n functional-885247
helpers_test.go:261: (dbg) Run:  kubectl --context functional-885247 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-885247 describe pod nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-885247 describe pod nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-885247/192.168.49.2
	Start Time:       Tue, 12 Dec 2023 00:28:18 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-twn6r (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-twn6r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m8s                 default-scheduler  Successfully assigned default/nginx-svc to functional-885247
	  Warning  Failed     2m35s                kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:18d2bb20c22e511b92a3ec81f553edfcaeeb74fd1c96a92c56a6c4252c75eec7 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    79s (x2 over 2m35s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     79s (x2 over 2m35s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    64s (x3 over 3m8s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4s (x3 over 2m35s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4s (x2 over 95s)     kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-885247/192.168.49.2
	Start Time:       Tue, 12 Dec 2023 00:28:23 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8c9zf (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-8c9zf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m3s                default-scheduler  Successfully assigned default/sp-pod to functional-885247
	  Warning  Failed     2m5s                kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     34s (x2 over 2m5s)  kubelet            Error: ErrImagePull
	  Warning  Failed     34s                 kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:736342e81e97220f954b8c33846ba80d2d95f59b30225a5c63d063c8b250b0ab in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    19s (x2 over 2m5s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     19s (x2 over 2m5s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    4s (x3 over 3m3s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.12s)
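Note: the post-mortem above shows the claim itself provisioned fine (the hostpath provisioner saved pvc-504e9958-6083-4000-9aa7-ecd4ade18766); both pods stay Pending only because anonymous pulls of docker.io/nginx hit Docker Hub's toomanyrequests limit. A minimal triage sketch against the same cluster, assuming kubectl, curl and jq are available on the runner (the registry check is only an illustration of how to read the anonymous pull quota, not part of the test suite):

	# confirm the PVC bound and the failure is confined to image pulls
	kubectl --context functional-885247 get pvc myclaim -n default -o jsonpath='{.status.phase}'
	kubectl --context functional-885247 get events -n default --field-selector involvedObject.name=sp-pod
	# read the anonymous pull quota Docker Hub reports for library/nginx
	TOKEN=$(curl -fsSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/nginx:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/library/nginx/manifests/latest" | grep -i ratelimit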

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-885247 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0815f656-0ba3-4e2e-9df2-ec1f6d02e4f8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-885247 -n functional-885247
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2023-12-12 00:32:18.681275812 +0000 UTC m=+1282.036262177
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-885247 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-885247 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-885247/192.168.49.2
Start Time:       Tue, 12 Dec 2023 00:28:18 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:  10.244.0.4
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-twn6r (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-twn6r:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-885247
  Warning  Failed     3m27s                kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:18d2bb20c22e511b92a3ec81f553edfcaeeb74fd1c96a92c56a6c4252c75eec7 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     56s (x3 over 3m27s)  kubelet            Error: ErrImagePull
  Warning  Failed     56s (x2 over 2m27s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    18s (x5 over 3m27s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     18s (x5 over 3m27s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    3s (x4 over 4m)      kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-885247 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-885247 logs nginx-svc -n default: exit status 1 (96.048237ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-885247 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.99s)
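Note (added for context): this timeout has the same root cause as the failure above, a rate-limited pull of docker.io/nginx:alpine, so the nginx-svc pod never left Pending within the 4m0s window. A hedged way to confirm the waiting reason outside the test harness would be:

	kubectl --context functional-885247 get pod nginx-svc -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'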

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (109s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-885247 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.103.21.197   10.103.21.197   80:32043/TCP   5m49s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (109.00s)
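Note (added for context): AccessDirect builds on the service created in WaitService/Setup; the URL it ends up probing is the bare "http://" (no host), and the body is empty rather than the expected nginx welcome page, consistent with the backing pod never becoming Ready. The LoadBalancer IP itself was assigned (10.103.21.197 above), so a hedged manual check, assuming minikube tunnel is still running on the host, would be:

	kubectl --context functional-885247 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl -s http://10.103.21.197/ | grep "Welcome to nginx!"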

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (363.39s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-996779 addons enable ingress --alsologtostderr -v=5
E1212 00:38:17.506895 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:38:17.512492 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:38:17.522836 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:38:17.543151 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:38:17.583671 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:38:17.664077 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:38:17.824510 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:38:18.145126 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:38:18.786201 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:38:20.066426 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:38:22.627096 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:38:27.747292 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:38:37.988151 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:38:58.468370 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:39:31.003705 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 00:39:39.428987 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:41:01.349949 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-996779 addons enable ingress --alsologtostderr -v=5: exit status 10 (6m0.939162475s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:35:39.202138 1147090 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:35:39.202979 1147090 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:39.203018 1147090 out.go:309] Setting ErrFile to fd 2...
	I1212 00:35:39.203038 1147090 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:39.203326 1147090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	I1212 00:35:39.203699 1147090 mustload.go:65] Loading cluster: ingress-addon-legacy-996779
	I1212 00:35:39.204130 1147090 config.go:182] Loaded profile config "ingress-addon-legacy-996779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1212 00:35:39.204194 1147090 addons.go:594] checking whether the cluster is paused
	I1212 00:35:39.204325 1147090 config.go:182] Loaded profile config "ingress-addon-legacy-996779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1212 00:35:39.204376 1147090 host.go:66] Checking if "ingress-addon-legacy-996779" exists ...
	I1212 00:35:39.204902 1147090 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996779 --format={{.State.Status}}
	I1212 00:35:39.223106 1147090 ssh_runner.go:195] Run: systemctl --version
	I1212 00:35:39.223169 1147090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:35:39.241268 1147090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa Username:docker}
	I1212 00:35:39.338789 1147090 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:35:39.338865 1147090 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:35:39.385945 1147090 cri.go:89] found id: "ff8f0eb44727154b75577edcbe42bd079daebaf3ab30852d958fbb8e0f0324b7"
	I1212 00:35:39.385970 1147090 cri.go:89] found id: "3b77b68bac2b04afc3e7d721d997ed77ddd55c75453a2536c06e9f802f3f8a01"
	I1212 00:35:39.385976 1147090 cri.go:89] found id: "fe0391b007c1a6fc35e858c2018dbca95ee2d82e45f004a50d9e9b5c92625d45"
	I1212 00:35:39.385981 1147090 cri.go:89] found id: "98591e814415a7f68a501b413ac3dea0b90d3e1f3d46ecf22ae957d501b471d1"
	I1212 00:35:39.385986 1147090 cri.go:89] found id: "fa5a904c833a9ec3d6a6ecb36751bb27ec22964245bbc48fc71c4c8ef086ed32"
	I1212 00:35:39.385991 1147090 cri.go:89] found id: "37cc807eb8db0b61a416564775bbeecb1cea6629f4a34a259723e681c4a15aca"
	I1212 00:35:39.386003 1147090 cri.go:89] found id: "ccc7574d027de156202827c2d3c6f2f08c572f0da026556be7af1066e9f751ea"
	I1212 00:35:39.386008 1147090 cri.go:89] found id: "b37841b0ca7e6c583e5f1b2bf62b18bba025f9ac412204bf622cf40da1944da1"
	I1212 00:35:39.386012 1147090 cri.go:89] found id: ""
	I1212 00:35:39.386065 1147090 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:35:39.413919 1147090 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"37cc807eb8db0b61a416564775bbeecb1cea6629f4a34a259723e681c4a15aca","pid":1487,"status":"running","bundle":"/run/containers/storage/overlay-containers/37cc807eb8db0b61a416564775bbeecb1cea6629f4a34a259723e681c4a15aca/userdata","rootfs":"/var/lib/containers/storage/overlay/9af50763b0bf48f74b4bda27934c792422926cc34a00a43351c1151f3d5810fe/merged","created":"2023-12-12T00:34:53.920795675Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"fd1dd8ff","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"fd1dd8ff\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"37cc807eb8db0b61a416564775bbeecb1cea6629f4a34a259723e681c4a15aca","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-12T00:34:53.829021338Z","io.kubernetes.cri-o.Image":"2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.18.20","io.kubernetes.cri-o.ImageRef":"2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ingress-addon-legacy-996779\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"78b40af95c64e5112ac985f00b18628c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ingress-addon-legacy-996779_78b40af95c64e5112ac985f00b18628c/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":
\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9af50763b0bf48f74b4bda27934c792422926cc34a00a43351c1151f3d5810fe/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ingress-addon-legacy-996779_kube-system_78b40af95c64e5112ac985f00b18628c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f96b59e1d42bf1d5ba8bd698b662906ae6eb2ceec331c602d0bcd78ad75206d2/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f96b59e1d42bf1d5ba8bd698b662906ae6eb2ceec331c602d0bcd78ad75206d2","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ingress-addon-legacy-996779_kube-system_78b40af95c64e5112ac985f00b18628c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/78b40af95c64e5112ac985f00b18628c/containers/kube-apiserver/f
fa17926\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/78b40af95c64e5112ac985f00b18628c/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","
io.kubernetes.pod.name":"kube-apiserver-ingress-addon-legacy-996779","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"78b40af95c64e5112ac985f00b18628c","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"78b40af95c64e5112ac985f00b18628c","kubernetes.io/config.seen":"2023-12-12T00:34:50.030893082Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3b77b68bac2b04afc3e7d721d997ed77ddd55c75453a2536c06e9f802f3f8a01","pid":2230,"status":"running","bundle":"/run/containers/storage/overlay-containers/3b77b68bac2b04afc3e7d721d997ed77ddd55c75453a2536c06e9f802f3f8a01/userdata","rootfs":"/var/lib/containers/storage/overlay/3652b3ca4ecd6e0c09ce5e2725e7aca3ecc29c9ed52723718ba1056ed5b85799/merged","created":"2023-12-12T00:35:32.235999111Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"653ba7c5","io.kubernetes.container.name":"co
redns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"653ba7c5\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","
io.kubernetes.cri-o.ContainerID":"3b77b68bac2b04afc3e7d721d997ed77ddd55c75453a2536c06e9f802f3f8a01","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-12T00:35:32.195880707Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns:1.6.7","io.kubernetes.cri-o.ImageRef":"6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-66bff467f8-fdsk9\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f4a5ac98-fd88-41d5-a8f9-70a22dfca002\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-66bff467f8-fdsk9_f4a5ac98-fd88-41d5-a8f9-70a22dfca002/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3652b3ca4ecd
6e0c09ce5e2725e7aca3ecc29c9ed52723718ba1056ed5b85799/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-66bff467f8-fdsk9_kube-system_f4a5ac98-fd88-41d5-a8f9-70a22dfca002_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/78b029263a8b4fb0c54e2b3c8dd775cfc8fe2f932c5f22d92948161f753f4119/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"78b029263a8b4fb0c54e2b3c8dd775cfc8fe2f932c5f22d92948161f753f4119","io.kubernetes.cri-o.SandboxName":"k8s_coredns-66bff467f8-fdsk9_kube-system_f4a5ac98-fd88-41d5-a8f9-70a22dfca002_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/f4a5ac98-fd88-41d5-a8f9-70a22dfca002/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/li
b/kubelet/pods/f4a5ac98-fd88-41d5-a8f9-70a22dfca002/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f4a5ac98-fd88-41d5-a8f9-70a22dfca002/containers/coredns/27c45834\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/f4a5ac98-fd88-41d5-a8f9-70a22dfca002/volumes/kubernetes.io~secret/coredns-token-5x4m7\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-66bff467f8-fdsk9","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f4a5ac98-fd88-41d5-a8f9-70a22dfca002","kubernetes.io/config.seen":"2023-12-12T00:35:31.592272820Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"98591e814415a7f68a501b413ac3dea0b90d3e1f3d46ecf22ae957d501b471d1","p
id":1990,"status":"running","bundle":"/run/containers/storage/overlay-containers/98591e814415a7f68a501b413ac3dea0b90d3e1f3d46ecf22ae957d501b471d1/userdata","rootfs":"/var/lib/containers/storage/overlay/fa78c3972067d33732b7359e448205f94b54350cd05579a37a7b7ce599227226/merged","created":"2023-12-12T00:35:19.470767486Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"afab1ee9","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"afab1ee9\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"98591e814415a7f68a501b413ac3dea0b90d3e1f3d46e
cf22ae957d501b471d1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-12T00:35:19.436760295Z","io.kubernetes.cri-o.Image":"565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.18.20","io.kubernetes.cri-o.ImageRef":"565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-d7hfm\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d842c03e-6616-4f70-b70f-7c1e160858c9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-d7hfm_d842c03e-6616-4f70-b70f-7c1e160858c9/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fa78c3972067d33732b7359e448205f94b54350cd05579a37a7b7ce599227226/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-d7hfm
_kube-system_d842c03e-6616-4f70-b70f-7c1e160858c9_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/39d923c7d6040898104615c6aff92a6853d33a8572146219ac531fa0af22a645/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"39d923c7d6040898104615c6aff92a6853d33a8572146219ac531fa0af22a645","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-d7hfm_kube-system_d842c03e-6616-4f70-b70f-7c1e160858c9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d842c03e-6616-4f70-b70f-7c1e160858c9/etc-hosts\",\"readonly\":false,
\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d842c03e-6616-4f70-b70f-7c1e160858c9/containers/kube-proxy/895d6015\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/d842c03e-6616-4f70-b70f-7c1e160858c9/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d842c03e-6616-4f70-b70f-7c1e160858c9/volumes/kubernetes.io~secret/kube-proxy-token-vrzsf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-d7hfm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d842c03e-6616-4f70-b70f-7c1e160858c9","kubernetes.io/config.seen":"2023-12-12T00:35:19.085357530Z","kubernetes
.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b37841b0ca7e6c583e5f1b2bf62b18bba025f9ac412204bf622cf40da1944da1","pid":1418,"status":"running","bundle":"/run/containers/storage/overlay-containers/b37841b0ca7e6c583e5f1b2bf62b18bba025f9ac412204bf622cf40da1944da1/userdata","rootfs":"/var/lib/containers/storage/overlay/37796e91261e92e6fb6dac1865b7a8dcb112b13958b1538298bfe83869130965/merged","created":"2023-12-12T00:34:53.689861645Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ef5ef709","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ef5ef709\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy
\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b37841b0ca7e6c583e5f1b2bf62b18bba025f9ac412204bf622cf40da1944da1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-12T00:34:53.656925018Z","io.kubernetes.cri-o.Image":"095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.18.20","io.kubernetes.cri-o.ImageRef":"095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ingress-addon-legacy-996779\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d12e497b0008e22acbcd5a9cf2dd48ac\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-ingress-addon-legacy-996779_d12e497b0008e22acbcd5a9cf2dd48ac/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\
"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/37796e91261e92e6fb6dac1865b7a8dcb112b13958b1538298bfe83869130965/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ingress-addon-legacy-996779_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/fb854e212adbfef9e5945edfc327f598a1dd4b517ad2792f442025222e5897a7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"fb854e212adbfef9e5945edfc327f598a1dd4b517ad2792f442025222e5897a7","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ingress-addon-legacy-996779_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d12e497b0008e22acbcd5a9cf2dd48ac/etc-hosts\",\"readonly\":false,\"propagation\":0,\"sel
inux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d12e497b0008e22acbcd5a9cf2dd48ac/containers/kube-scheduler/9525136c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ingress-addon-legacy-996779","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d12e497b0008e22acbcd5a9cf2dd48ac","kubernetes.io/config.hash":"d12e497b0008e22acbcd5a9cf2dd48ac","kubernetes.io/config.seen":"2023-12-12T00:34:50.035135442Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ccc7574d027de156202827c2d3c6f2f08c572f0da026556be7af1066e9f751ea","pid":1460,"status":"running","bundle":"/run/containers/storage/overlay-containers/ccc7574d027de156202827c2d3c6f2f08c57
2f0da026556be7af1066e9f751ea/userdata","rootfs":"/var/lib/containers/storage/overlay/f5d58825299ee2e6d3e5738311accbb8bc1866902624007dbdd022c6440fd7e9/merged","created":"2023-12-12T00:34:53.902965917Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ce880c0b","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ce880c0b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ccc7574d027de156202827c2d3c6f2f08c572f0da026556be7af1066e9f751ea","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023
-12-12T00:34:53.822011756Z","io.kubernetes.cri-o.Image":"68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.18.20","io.kubernetes.cri-o.ImageRef":"68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-ingress-addon-legacy-996779\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"49b043cd68fd30a453bdf128db5271f3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ingress-addon-legacy-996779_49b043cd68fd30a453bdf128db5271f3/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f5d58825299ee2e6d3e5738311accbb8bc1866902624007dbdd022c6440fd7e9/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-m
anager_kube-controller-manager-ingress-addon-legacy-996779_kube-system_49b043cd68fd30a453bdf128db5271f3_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ca79d23e526c6a9946bb0175911ac1be2bcf1953028f1c9cfae49e55a933f7ea/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ca79d23e526c6a9946bb0175911ac1be2bcf1953028f1c9cfae49e55a933f7ea","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ingress-addon-legacy-996779_kube-system_49b043cd68fd30a453bdf128db5271f3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/49b043cd68fd30a453bdf128db5271f3/containers/kube-controller-manager/7e855931\",\"readonly\":false,\
"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/49b043cd68fd30a453bdf128db5271f3/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":fa
lse},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ingress-addon-legacy-996779","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"49b043cd68fd30a453bdf128db5271f3","kubernetes.io/config.hash":"49b043cd68fd30a453bdf128db5271f3","kubernetes.io/config.seen":"2023-12-12T00:34:50.033318183Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fa5a904c833a9ec3d6a6ecb36751bb27ec22964245bbc48fc71c4c8ef086ed32","pid":1496,"status":"running","bundle":"/run/containers/storage/overlay-containers/fa5a904c833a9ec3d6a6ecb36751bb27ec22964245bbc48fc71c4c8ef086ed32/userdata","rootfs":"/var/lib/containers/storage/overlay/cca4305ee12a79ee662c0aaa91530d6677acfd02173b6f9b464e91327c4fc579/merged","created":"2023-12-12T00
:34:54.055363627Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e6be2cc0","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e6be2cc0\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"fa5a904c833a9ec3d6a6ecb36751bb27ec22964245bbc48fc71c4c8ef086ed32","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-12T00:34:53.857331229Z","io.kubernetes.cri-o.Image":"ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.3-0","io.kubernetes.cri-o
.ImageRef":"ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ingress-addon-legacy-996779\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c7b61660bac43f7648a1028efdfa9d2e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ingress-addon-legacy-996779_c7b61660bac43f7648a1028efdfa9d2e/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cca4305ee12a79ee662c0aaa91530d6677acfd02173b6f9b464e91327c4fc579/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ingress-addon-legacy-996779_kube-system_c7b61660bac43f7648a1028efdfa9d2e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f5bc34c7eaaad3351bd92a369136e06734047dfbea0f6543809ffaeccce2651d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f5bc34c7eaaad3351bd92a369136e06734047dfbea0f654380
9ffaeccce2651d","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ingress-addon-legacy-996779_kube-system_c7b61660bac43f7648a1028efdfa9d2e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c7b61660bac43f7648a1028efdfa9d2e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c7b61660bac43f7648a1028efdfa9d2e/containers/etcd/e4f3d886\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":fals
e}]","io.kubernetes.pod.name":"etcd-ingress-addon-legacy-996779","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c7b61660bac43f7648a1028efdfa9d2e","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"c7b61660bac43f7648a1028efdfa9d2e","kubernetes.io/config.seen":"2023-12-12T00:34:50.036245637Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fe0391b007c1a6fc35e858c2018dbca95ee2d82e45f004a50d9e9b5c92625d45","pid":2104,"status":"running","bundle":"/run/containers/storage/overlay-containers/fe0391b007c1a6fc35e858c2018dbca95ee2d82e45f004a50d9e9b5c92625d45/userdata","rootfs":"/var/lib/containers/storage/overlay/00bc88140bd0549fb76ad5496d8a7ea43671d782b6564366f894dab574f39c0b/merged","created":"2023-12-12T00:35:21.418442799Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"63720bd1","io.kubernetes.container.name":"kindnet-cni",
"io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"63720bd1\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"fe0391b007c1a6fc35e858c2018dbca95ee2d82e45f004a50d9e9b5c92625d45","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-12T00:35:21.383839161Z","io.kubernetes.cri-o.Image":"docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kuberne
tes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-vtlkw\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cb7c6c14-13c5-46fe-be06-c0ee5259bfd9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-vtlkw_cb7c6c14-13c5-46fe-be06-c0ee5259bfd9/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/00bc88140bd0549fb76ad5496d8a7ea43671d782b6564366f894dab574f39c0b/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-vtlkw_kube-system_cb7c6c14-13c5-46fe-be06-c0ee5259bfd9_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c990111658782660f4a04737c5617307315a8ead4554a818af292316b336fd0a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c990111658782660f4a04737c5617307315a8ead4554a818af292316b336fd0a","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-vtlkw_kube-system_cb7c6c14-13c5-46fe-be06-c
0ee5259bfd9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cb7c6c14-13c5-46fe-be06-c0ee5259bfd9/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cb7c6c14-13c5-46fe-be06-c0ee5259bfd9/containers/kindnet-cni/0a7c2db8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"contain
er_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/cb7c6c14-13c5-46fe-be06-c0ee5259bfd9/volumes/kubernetes.io~secret/kindnet-token-srmtj\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-vtlkw","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cb7c6c14-13c5-46fe-be06-c0ee5259bfd9","kubernetes.io/config.seen":"2023-12-12T00:35:19.184372515Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ff8f0eb44727154b75577edcbe42bd079daebaf3ab30852d958fbb8e0f0324b7","pid":2243,"status":"running","bundle":"/run/containers/storage/overlay-containers/ff8f0eb44727154b75577edcbe42bd079daebaf3ab30852d958fbb8e0f0324b7/userdata","rootfs":"/var/lib/containers/storage/overlay/71c98c0ca51b0dafaaa0f885327c6f9fe94f897b0e225b6b0d4fe2057a16889d/merged","created":"2023-12-12T00:35:32.257471937Z","annotations":{"io.container.manager":"cri-o"
,"io.kubernetes.container.hash":"cb226451","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"cb226451\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ff8f0eb44727154b75577edcbe42bd079daebaf3ab30852d958fbb8e0f0324b7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-12T00:35:32.212663071Z","io.kubernetes.cri-o.Image":"gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io
.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9610964a-cbe8-4bdd-9b4f-f4438f39b894\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_9610964a-cbe8-4bdd-9b4f-f4438f39b894/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/71c98c0ca51b0dafaaa0f885327c6f9fe94f897b0e225b6b0d4fe2057a16889d/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_9610964a-cbe8-4bdd-9b4f-f4438f39b894_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/61e2a4828b083fac3d44f793f3db7d67e1a4651911323c508ea7cf5306f52d36/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"
61e2a4828b083fac3d44f793f3db7d67e1a4651911323c508ea7cf5306f52d36","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_9610964a-cbe8-4bdd-9b4f-f4438f39b894_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9610964a-cbe8-4bdd-9b4f-f4438f39b894/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9610964a-cbe8-4bdd-9b4f-f4438f39b894/containers/storage-provisioner/78f13e40\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/9610964a-cbe8-4bdd-9b4f-f
4438f39b894/volumes/kubernetes.io~secret/storage-provisioner-token-mf494\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9610964a-cbe8-4bdd-9b4f-f4438f39b894","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Dire
ctory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2023-12-12T00:35:29.589447506Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I1212 00:35:39.414712 1147090 cri.go:126] list returned 8 containers
	I1212 00:35:39.414730 1147090 cri.go:129] container: {ID:37cc807eb8db0b61a416564775bbeecb1cea6629f4a34a259723e681c4a15aca Status:running}
	I1212 00:35:39.414756 1147090 cri.go:135] skipping {37cc807eb8db0b61a416564775bbeecb1cea6629f4a34a259723e681c4a15aca running}: state = "running", want "paused"
	I1212 00:35:39.414773 1147090 cri.go:129] container: {ID:3b77b68bac2b04afc3e7d721d997ed77ddd55c75453a2536c06e9f802f3f8a01 Status:running}
	I1212 00:35:39.414780 1147090 cri.go:135] skipping {3b77b68bac2b04afc3e7d721d997ed77ddd55c75453a2536c06e9f802f3f8a01 running}: state = "running", want "paused"
	I1212 00:35:39.414787 1147090 cri.go:129] container: {ID:98591e814415a7f68a501b413ac3dea0b90d3e1f3d46ecf22ae957d501b471d1 Status:running}
	I1212 00:35:39.414796 1147090 cri.go:135] skipping {98591e814415a7f68a501b413ac3dea0b90d3e1f3d46ecf22ae957d501b471d1 running}: state = "running", want "paused"
	I1212 00:35:39.414802 1147090 cri.go:129] container: {ID:b37841b0ca7e6c583e5f1b2bf62b18bba025f9ac412204bf622cf40da1944da1 Status:running}
	I1212 00:35:39.414813 1147090 cri.go:135] skipping {b37841b0ca7e6c583e5f1b2bf62b18bba025f9ac412204bf622cf40da1944da1 running}: state = "running", want "paused"
	I1212 00:35:39.414831 1147090 cri.go:129] container: {ID:ccc7574d027de156202827c2d3c6f2f08c572f0da026556be7af1066e9f751ea Status:running}
	I1212 00:35:39.414845 1147090 cri.go:135] skipping {ccc7574d027de156202827c2d3c6f2f08c572f0da026556be7af1066e9f751ea running}: state = "running", want "paused"
	I1212 00:35:39.414852 1147090 cri.go:129] container: {ID:fa5a904c833a9ec3d6a6ecb36751bb27ec22964245bbc48fc71c4c8ef086ed32 Status:running}
	I1212 00:35:39.414869 1147090 cri.go:135] skipping {fa5a904c833a9ec3d6a6ecb36751bb27ec22964245bbc48fc71c4c8ef086ed32 running}: state = "running", want "paused"
	I1212 00:35:39.414882 1147090 cri.go:129] container: {ID:fe0391b007c1a6fc35e858c2018dbca95ee2d82e45f004a50d9e9b5c92625d45 Status:running}
	I1212 00:35:39.414889 1147090 cri.go:135] skipping {fe0391b007c1a6fc35e858c2018dbca95ee2d82e45f004a50d9e9b5c92625d45 running}: state = "running", want "paused"
	I1212 00:35:39.414897 1147090 cri.go:129] container: {ID:ff8f0eb44727154b75577edcbe42bd079daebaf3ab30852d958fbb8e0f0324b7 Status:running}
	I1212 00:35:39.414904 1147090 cri.go:135] skipping {ff8f0eb44727154b75577edcbe42bd079daebaf3ab30852d958fbb8e0f0324b7 running}: state = "running", want "paused"
	I1212 00:35:39.418100 1147090 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1212 00:35:39.420243 1147090 config.go:182] Loaded profile config "ingress-addon-legacy-996779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1212 00:35:39.420265 1147090 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-996779"
	I1212 00:35:39.420273 1147090 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-996779"
	I1212 00:35:39.420327 1147090 host.go:66] Checking if "ingress-addon-legacy-996779" exists ...
	I1212 00:35:39.420773 1147090 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996779 --format={{.State.Status}}
	I1212 00:35:39.441310 1147090 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1212 00:35:39.443519 1147090 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1212 00:35:39.445506 1147090 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I1212 00:35:39.447727 1147090 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 00:35:39.447754 1147090 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I1212 00:35:39.447834 1147090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:35:39.466329 1147090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa Username:docker}
	I1212 00:35:39.582285 1147090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 00:35:40.032146 1147090 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-996779"
	I1212 00:35:40.034040 1147090 out.go:177] * Verifying ingress addon...
	I1212 00:35:40.036877 1147090 kapi.go:59] client config for ingress-addon-legacy-996779: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.key", CAFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7710), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:35:40.037664 1147090 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 00:35:40.038140 1147090 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1212 00:35:40.064062 1147090 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 00:35:40.064090 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:35:40.069845 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:35:40.574144 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... the same kapi.go:96 "waiting for pod" message repeats roughly every 0.5s, the pod remaining Pending, from 00:35:40 through 00:39:45 ...]
	I1212 00:39:45.574474 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:46.073766 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:46.574002 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:47.073942 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:47.574303 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:48.073598 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:48.573788 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:49.074116 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:49.574193 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:50.074300 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:50.574511 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:51.073600 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:51.573763 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:52.074099 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:52.574516 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:53.074220 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:53.579421 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:54.074019 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:54.573730 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:55.073981 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:55.574117 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:56.074379 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:56.573759 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:57.074683 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:57.573785 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:58.073985 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:58.574309 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:59.073732 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:39:59.574339 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:00.074487 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:00.573555 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:01.073839 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:01.573998 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:02.074638 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:02.573543 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:03.073668 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:03.573886 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:04.074297 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:04.573899 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:05.074367 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:05.573511 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:06.073792 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:06.573956 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:07.074454 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:07.573563 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:08.073578 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:08.573823 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:09.074098 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:09.574864 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:10.074477 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:10.573638 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:11.074682 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:11.573761 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:12.073620 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:12.574081 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:13.074300 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:13.574607 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:14.073882 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:14.574344 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:15.073980 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:15.574372 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:16.073675 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:16.573591 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:17.073686 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:17.573645 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:18.074544 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:18.573744 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:19.074012 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:19.574471 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:20.073816 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:20.573891 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:21.073946 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:21.573705 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:22.073846 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:22.574100 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:23.074763 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:23.574003 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:24.074233 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:24.573663 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:25.074728 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:25.573774 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:26.073950 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:26.574581 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:27.073878 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:27.574147 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:28.074234 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:28.574597 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:29.073727 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:29.574477 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:30.074600 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:30.573741 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:31.073877 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:31.574072 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:32.074312 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:32.573668 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:33.074676 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:33.573626 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:34.073819 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:34.574081 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:35.074184 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:35.574376 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:36.073566 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:36.573807 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:37.073470 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:37.573588 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:38.073814 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:38.574175 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:39.074391 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:39.573798 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:40.074308 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:40.573541 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:41.075218 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:41.575399 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:42.075075 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:42.574030 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:43.074119 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:43.574387 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:44.073707 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:44.573747 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:45.074095 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:45.573848 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:46.074193 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:46.574393 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:47.074263 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:47.574533 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:48.073853 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:48.573605 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:49.074373 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:49.573817 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:50.075045 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:50.574317 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:51.074730 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:51.573619 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:52.073790 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:52.573900 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:53.074390 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:53.573683 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:54.074733 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:54.574125 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:55.074535 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:55.573889 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:56.074015 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:56.574579 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:57.075035 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:57.573956 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:58.074167 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:58.574504 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:59.073627 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:40:59.574096 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:00.076726 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:00.574018 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:01.074386 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:01.573821 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:02.074264 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:02.573712 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:03.074259 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:03.574343 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:04.074172 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:04.574640 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:05.073745 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:05.574567 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:06.073871 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:06.574231 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:07.075628 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:07.573922 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:08.074337 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:08.573677 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:09.073934 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:09.573627 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:10.074216 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:10.574552 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:11.073784 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:11.574110 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:12.074261 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:12.574370 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:13.076396 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:13.574014 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:14.074370 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:14.573750 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:15.074189 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:15.574349 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:16.073643 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:16.573799 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:17.074071 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:17.574531 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:18.073993 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:18.574062 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:19.074488 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:19.574133 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:20.074466 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:20.573767 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:21.074077 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:21.574357 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:22.074365 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:22.573666 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:23.073853 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:23.574155 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:24.073999 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:24.573694 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:25.074404 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:25.573678 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:26.073987 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:26.574200 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:27.074805 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:27.574211 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:28.074347 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:28.573747 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:29.074190 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:29.573689 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:30.074291 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:30.573474 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:31.074415 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:31.573752 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:32.073786 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:32.573775 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:33.073897 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:33.574079 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:34.074079 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:34.573668 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:35.073974 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:35.573933 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:36.074275 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:36.574519 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:37.073639 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:37.573838 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:38.073722 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:38.574557 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:39.073647 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:39.574067 1147090 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:41:40.038718 1147090 kapi.go:107] duration metric: took 6m0.000564997s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 00:41:40.040955 1147090 out.go:177] 
	W1212 00:41:40.043171 1147090 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	W1212 00:41:40.043210 1147090 out.go:239] * 
	W1212 00:41:40.049562 1147090 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:41:40.051413 1147090 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
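The enable loop above polled for the full 6m0s window without the ingress-nginx pod ever leaving Pending, so the addon callback failed with a context deadline. A minimal manual check of why the pod never scheduled (not part of the test run, and assuming the kubectl context carries the profile name ingress-addon-legacy-996779 and that the legacy addon places its controller in kube-system) would be:

	kubectl --context ingress-addon-legacy-996779 get pods -A -l app.kubernetes.io/name=ingress-nginx
	kubectl --context ingress-addon-legacy-996779 -n kube-system describe pods -l app.kubernetes.io/name=ingress-nginx
	out/minikube-linux-arm64 -p ingress-addon-legacy-996779 addons list

The Events section of the describe output usually shows whether the pod is stuck on an image pull or on scheduling; the test log itself only records the poll loop and the final deadline error.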
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-996779
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-996779:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "33133c4cc7386b754bf07c14ecb0ad9cf226b60b3bcdd7868469d0efaba5278f",
	        "Created": "2023-12-12T00:34:28.478651304Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1144572,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T00:34:28.797498141Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5372d9a9dbba152548ea1c7dddaca1a9a8c998722f22aaa148c1ee00bf6473be",
	        "ResolvConfPath": "/var/lib/docker/containers/33133c4cc7386b754bf07c14ecb0ad9cf226b60b3bcdd7868469d0efaba5278f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/33133c4cc7386b754bf07c14ecb0ad9cf226b60b3bcdd7868469d0efaba5278f/hostname",
	        "HostsPath": "/var/lib/docker/containers/33133c4cc7386b754bf07c14ecb0ad9cf226b60b3bcdd7868469d0efaba5278f/hosts",
	        "LogPath": "/var/lib/docker/containers/33133c4cc7386b754bf07c14ecb0ad9cf226b60b3bcdd7868469d0efaba5278f/33133c4cc7386b754bf07c14ecb0ad9cf226b60b3bcdd7868469d0efaba5278f-json.log",
	        "Name": "/ingress-addon-legacy-996779",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-996779:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-996779",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8e1af87c87d6d78c2400b828243f7fb4c87923638674dcc5d52a4d7aa9185ab9-init/diff:/var/lib/docker/overlay2/c2a4fdcea722509eecd2151e38f63a7bf15f9db138183afe352dd4d4bae4600f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e1af87c87d6d78c2400b828243f7fb4c87923638674dcc5d52a4d7aa9185ab9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e1af87c87d6d78c2400b828243f7fb4c87923638674dcc5d52a4d7aa9185ab9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e1af87c87d6d78c2400b828243f7fb4c87923638674dcc5d52a4d7aa9185ab9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-996779",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-996779/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-996779",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-996779",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-996779",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a21320cf5ac119656128f0640ea803e37f5e213873309801c6b0850578ca9984",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34025"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34024"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34021"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34023"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34022"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a21320cf5ac1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-996779": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "33133c4cc738",
	                        "ingress-addon-legacy-996779"
	                    ],
	                    "NetworkID": "f93f4e79528b1f2d8a4fa7837ba29fe1e4897fa1f29fb970b286a7a56eb6350c",
	                    "EndpointID": "4054e004ff382f0fb204dc7c0d06d49a25951ed54983ad3b4df6e06afeaaa4df",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-996779 -n ingress-addon-legacy-996779
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddonActivation FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-996779 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-996779 logs -n 25: (1.411492951s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-885247 image rm                                             | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-885247               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-885247 image ls                                             | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	| image          | functional-885247 image load                                           | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-885247 image ls                                             | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	| image          | functional-885247 image save --daemon                                  | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-885247               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-885247 ssh sudo cat                                         | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | /etc/test/nested/copy/1117383/hosts                                    |                             |         |         |                     |                     |
	| ssh            | functional-885247 ssh sudo cat                                         | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | /etc/ssl/certs/1117383.pem                                             |                             |         |         |                     |                     |
	| ssh            | functional-885247 ssh sudo cat                                         | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | /usr/share/ca-certificates/1117383.pem                                 |                             |         |         |                     |                     |
	| ssh            | functional-885247 ssh sudo cat                                         | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | /etc/ssl/certs/51391683.0                                              |                             |         |         |                     |                     |
	| ssh            | functional-885247 ssh sudo cat                                         | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | /etc/ssl/certs/11173832.pem                                            |                             |         |         |                     |                     |
	| ssh            | functional-885247 ssh sudo cat                                         | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | /usr/share/ca-certificates/11173832.pem                                |                             |         |         |                     |                     |
	| ssh            | functional-885247 ssh sudo cat                                         | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                              |                             |         |         |                     |                     |
	| image          | functional-885247                                                      | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-885247                                                      | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-885247 ssh pgrep                                            | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-885247 image build -t                                       | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | localhost/my-image:functional-885247                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-885247 image ls                                             | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	| image          | functional-885247                                                      | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-885247                                                      | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| update-context | functional-885247                                                      | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-885247                                                      | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-885247                                                      | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| delete         | -p functional-885247                                                   | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:34 UTC | 12 Dec 23 00:34 UTC |
	| start          | -p ingress-addon-legacy-996779                                         | ingress-addon-legacy-996779 | jenkins | v1.32.0 | 12 Dec 23 00:34 UTC | 12 Dec 23 00:35 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-996779                                            | ingress-addon-legacy-996779 | jenkins | v1.32.0 | 12 Dec 23 00:35 UTC |                     |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
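The last two rows of the table are the commands this test exercises. Reproduced by hand with the same binary and flags (all taken from the table above), they would look roughly like:

	out/minikube-linux-arm64 start -p ingress-addon-legacy-996779 \
	  --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
	  --alsologtostderr -v=5 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p ingress-addon-legacy-996779 addons enable ingress --alsologtostderr -v=5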
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:34:10
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:34:10.862709 1144110 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:34:10.862919 1144110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:34:10.862931 1144110 out.go:309] Setting ErrFile to fd 2...
	I1212 00:34:10.862936 1144110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:34:10.863265 1144110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	I1212 00:34:10.863776 1144110 out.go:303] Setting JSON to false
	I1212 00:34:10.864722 1144110 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":26197,"bootTime":1702315054,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 00:34:10.864803 1144110 start.go:138] virtualization:  
	I1212 00:34:10.867536 1144110 out.go:177] * [ingress-addon-legacy-996779] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:34:10.870316 1144110 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:34:10.872310 1144110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:34:10.870490 1144110 notify.go:220] Checking for updates...
	I1212 00:34:10.876400 1144110 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:34:10.878485 1144110 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	I1212 00:34:10.880825 1144110 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:34:10.883726 1144110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:34:10.886952 1144110 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:34:10.910961 1144110 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:34:10.911078 1144110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:34:11.015400 1144110 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-12 00:34:11.00571572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:34:11.015503 1144110 docker.go:295] overlay module found
	I1212 00:34:11.017862 1144110 out.go:177] * Using the docker driver based on user configuration
	I1212 00:34:11.020004 1144110 start.go:298] selected driver: docker
	I1212 00:34:11.020019 1144110 start.go:902] validating driver "docker" against <nil>
	I1212 00:34:11.020031 1144110 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:34:11.020653 1144110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:34:11.085552 1144110 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-12 00:34:11.076181789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:34:11.085713 1144110 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 00:34:11.085948 1144110 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:34:11.087954 1144110 out.go:177] * Using Docker driver with root privileges
	I1212 00:34:11.090349 1144110 cni.go:84] Creating CNI manager for ""
	I1212 00:34:11.090371 1144110 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:34:11.090382 1144110 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:34:11.090398 1144110 start_flags.go:323] config:
	{Name:ingress-addon-legacy-996779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-996779 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:34:11.092682 1144110 out.go:177] * Starting control plane node ingress-addon-legacy-996779 in cluster ingress-addon-legacy-996779
	I1212 00:34:11.094375 1144110 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 00:34:11.095954 1144110 out.go:177] * Pulling base image ...
	I1212 00:34:11.097650 1144110 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 00:34:11.097711 1144110 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:34:11.114872 1144110 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon, skipping pull
	I1212 00:34:11.114911 1144110 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 exists in daemon, skipping load
	I1212 00:34:11.167645 1144110 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1212 00:34:11.167674 1144110 cache.go:56] Caching tarball of preloaded images
	I1212 00:34:11.167844 1144110 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 00:34:11.170152 1144110 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1212 00:34:11.171959 1144110 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1212 00:34:11.283342 1144110 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1212 00:34:20.638485 1144110 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1212 00:34:20.638613 1144110 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1212 00:34:21.830955 1144110 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
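The preload step above can be reproduced manually; the URL and md5 checksum below are the ones logged, and the tarball can be downloaded anywhere before verifying it:

	curl -fLo preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 \
	  'https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4'
	echo '8ddd7f37d9a9977fe856222993d36c3d  preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4' | md5sum -c -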
	I1212 00:34:21.831362 1144110 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/config.json ...
	I1212 00:34:21.831394 1144110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/config.json: {Name:mk9224e714fa93329a657ba7e5eaebf2850d6949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:21.831580 1144110 cache.go:194] Successfully downloaded all kic artifacts
	I1212 00:34:21.831638 1144110 start.go:365] acquiring machines lock for ingress-addon-legacy-996779: {Name:mk96b53ba9ba8ba029ff3fcdb15e7bcfc32e7d88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:34:21.831697 1144110 start.go:369] acquired machines lock for "ingress-addon-legacy-996779" in 46.563µs
	I1212 00:34:21.831720 1144110 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-996779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-996779 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:34:21.831796 1144110 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:34:21.834153 1144110 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1212 00:34:21.834378 1144110 start.go:159] libmachine.API.Create for "ingress-addon-legacy-996779" (driver="docker")
	I1212 00:34:21.834410 1144110 client.go:168] LocalClient.Create starting
	I1212 00:34:21.834476 1144110 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem
	I1212 00:34:21.834508 1144110 main.go:141] libmachine: Decoding PEM data...
	I1212 00:34:21.834527 1144110 main.go:141] libmachine: Parsing certificate...
	I1212 00:34:21.834582 1144110 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem
	I1212 00:34:21.834605 1144110 main.go:141] libmachine: Decoding PEM data...
	I1212 00:34:21.834620 1144110 main.go:141] libmachine: Parsing certificate...
	I1212 00:34:21.834962 1144110 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-996779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:34:21.851774 1144110 cli_runner.go:211] docker network inspect ingress-addon-legacy-996779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:34:21.851870 1144110 network_create.go:281] running [docker network inspect ingress-addon-legacy-996779] to gather additional debugging logs...
	I1212 00:34:21.851892 1144110 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-996779
	W1212 00:34:21.868744 1144110 cli_runner.go:211] docker network inspect ingress-addon-legacy-996779 returned with exit code 1
	I1212 00:34:21.868778 1144110 network_create.go:284] error running [docker network inspect ingress-addon-legacy-996779]: docker network inspect ingress-addon-legacy-996779: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-996779 not found
	I1212 00:34:21.868792 1144110 network_create.go:286] output of [docker network inspect ingress-addon-legacy-996779]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-996779 not found
	
	** /stderr **
	I1212 00:34:21.868929 1144110 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:34:21.886412 1144110 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40020f63f0}
	I1212 00:34:21.886446 1144110 network_create.go:124] attempt to create docker network ingress-addon-legacy-996779 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 00:34:21.886506 1144110 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-996779 ingress-addon-legacy-996779
	I1212 00:34:21.960112 1144110 network_create.go:108] docker network ingress-addon-legacy-996779 192.168.49.0/24 created
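The cluster network is an ordinary Docker bridge. To recreate or verify it by hand (subnet, gateway, MTU and labels taken from the command in the log above):

	docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-996779 \
	  ingress-addon-legacy-996779
	docker network inspect ingress-addon-legacy-996779 --format '{{(index .IPAM.Config 0).Subnet}}'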
	I1212 00:34:21.960146 1144110 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-996779" container
	I1212 00:34:21.960217 1144110 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:34:21.976636 1144110 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-996779 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-996779 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:34:21.994796 1144110 oci.go:103] Successfully created a docker volume ingress-addon-legacy-996779
	I1212 00:34:21.994882 1144110 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-996779-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-996779 --entrypoint /usr/bin/test -v ingress-addon-legacy-996779:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -d /var/lib
	I1212 00:34:23.532043 1144110 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-996779-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-996779 --entrypoint /usr/bin/test -v ingress-addon-legacy-996779:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -d /var/lib: (1.537116156s)
	I1212 00:34:23.532072 1144110 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-996779
	I1212 00:34:23.532090 1144110 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 00:34:23.532111 1144110 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:34:23.532199 1144110 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-996779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:34:28.399111 1144110 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-996779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -I lz4 -xf /preloaded.tar -C /extractDir: (4.866861834s)
	I1212 00:34:28.399144 1144110 kic.go:203] duration metric: took 4.867031 seconds to extract preloaded images to volume
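The preloaded images live in a named volume that is populated before the node container exists. A minimal sketch of the same two steps, with KICBASE and PRELOAD used only as shorthand for the image reference and tarball path shown in the log:

	KICBASE='gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401'
	PRELOAD=/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	docker volume create ingress-addon-legacy-996779 \
	  --label name.minikube.sigs.k8s.io=ingress-addon-legacy-996779 \
	  --label created_by.minikube.sigs.k8s.io=true
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD":/preloaded.tar:ro -v ingress-addon-legacy-996779:/extractDir \
	  "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir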
	W1212 00:34:28.399279 1144110 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 00:34:28.399389 1144110 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:34:28.462805 1144110 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-996779 --name ingress-addon-legacy-996779 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-996779 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-996779 --network ingress-addon-legacy-996779 --ip 192.168.49.2 --volume ingress-addon-legacy-996779:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401
	I1212 00:34:28.806711 1144110 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996779 --format={{.State.Running}}
	I1212 00:34:28.833695 1144110 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996779 --format={{.State.Status}}
	I1212 00:34:28.859343 1144110 cli_runner.go:164] Run: docker exec ingress-addon-legacy-996779 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:34:28.922216 1144110 oci.go:144] the created container "ingress-addon-legacy-996779" has a running status.
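With the node container up, its state can be checked the same way the log does (the container is named after the profile):

	docker container inspect ingress-addon-legacy-996779 --format '{{.State.Status}}'
	docker exec ingress-addon-legacy-996779 stat /var/lib/dpkg/alternatives/iptables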
	I1212 00:34:28.922246 1144110 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa...
	I1212 00:34:29.250069 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1212 00:34:29.250136 1144110 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:34:29.280403 1144110 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996779 --format={{.State.Status}}
	I1212 00:34:29.316990 1144110 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:34:29.317009 1144110 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-996779 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:34:29.394379 1144110 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996779 --format={{.State.Status}}
	I1212 00:34:29.438229 1144110 machine.go:88] provisioning docker machine ...
	I1212 00:34:29.438263 1144110 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-996779"
	I1212 00:34:29.438330 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:34:29.476317 1144110 main.go:141] libmachine: Using SSH client type: native
	I1212 00:34:29.476760 1144110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34025 <nil> <nil>}
	I1212 00:34:29.476781 1144110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-996779 && echo "ingress-addon-legacy-996779" | sudo tee /etc/hostname
	I1212 00:34:29.477426 1144110 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35748->127.0.0.1:34025: read: connection reset by peer
	I1212 00:34:32.632142 1144110 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-996779
	
	I1212 00:34:32.632224 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:34:32.651152 1144110 main.go:141] libmachine: Using SSH client type: native
	I1212 00:34:32.651565 1144110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34025 <nil> <nil>}
	I1212 00:34:32.651590 1144110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-996779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-996779/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-996779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:34:32.794510 1144110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
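Provisioning happens over SSH to a port forwarded from the container. A manual session against this particular run would look like the following (port 34025 and the key path are specific to this run and change on every start):

	ssh -o StrictHostKeyChecking=no -p 34025 \
	  -i /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa \
	  docker@127.0.0.1 hostname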
	I1212 00:34:32.794539 1144110 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17764-1111943/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-1111943/.minikube}
	I1212 00:34:32.794558 1144110 ubuntu.go:177] setting up certificates
	I1212 00:34:32.794567 1144110 provision.go:83] configureAuth start
	I1212 00:34:32.794627 1144110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-996779
	I1212 00:34:32.812330 1144110 provision.go:138] copyHostCerts
	I1212 00:34:32.812369 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem
	I1212 00:34:32.812400 1144110 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem, removing ...
	I1212 00:34:32.812412 1144110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem
	I1212 00:34:32.812486 1144110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem (1123 bytes)
	I1212 00:34:32.812568 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem
	I1212 00:34:32.812591 1144110 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem, removing ...
	I1212 00:34:32.812600 1144110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem
	I1212 00:34:32.812629 1144110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem (1679 bytes)
	I1212 00:34:32.812680 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem
	I1212 00:34:32.812703 1144110 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem, removing ...
	I1212 00:34:32.812711 1144110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem
	I1212 00:34:32.812735 1144110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem (1082 bytes)
	I1212 00:34:32.812788 1144110 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-996779 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-996779]
	I1212 00:34:33.611671 1144110 provision.go:172] copyRemoteCerts
	I1212 00:34:33.611742 1144110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:34:33.611788 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:34:33.629657 1144110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa Username:docker}
	I1212 00:34:33.731648 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:34:33.731718 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:34:33.759442 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:34:33.759505 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:34:33.787896 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:34:33.787959 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1212 00:34:33.815636 1144110 provision.go:86] duration metric: configureAuth took 1.021055131s
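The generated server certificate can be sanity-checked against the SANs requested above (192.168.49.2, 127.0.0.1, localhost, minikube, ingress-addon-legacy-996779); a quick look with a reasonably recent openssl, using the host-side copy of the cert:

	openssl x509 -noout -subject -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem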
	I1212 00:34:33.815662 1144110 ubuntu.go:193] setting minikube options for container-runtime
	I1212 00:34:33.815855 1144110 config.go:182] Loaded profile config "ingress-addon-legacy-996779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1212 00:34:33.815960 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:34:33.835791 1144110 main.go:141] libmachine: Using SSH client type: native
	I1212 00:34:33.836217 1144110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34025 <nil> <nil>}
	I1212 00:34:33.836238 1144110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:34:34.113153 1144110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:34:34.113177 1144110 machine.go:91] provisioned docker machine in 4.674922886s
	I1212 00:34:34.113188 1144110 client.go:171] LocalClient.Create took 12.27877161s
	I1212 00:34:34.113205 1144110 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-996779" took 12.278827215s
	I1212 00:34:34.113213 1144110 start.go:300] post-start starting for "ingress-addon-legacy-996779" (driver="docker")
	I1212 00:34:34.113225 1144110 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:34:34.113317 1144110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:34:34.113366 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:34:34.131670 1144110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa Username:docker}
	I1212 00:34:34.233803 1144110 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:34:34.237958 1144110 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:34:34.237992 1144110 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 00:34:34.238003 1144110 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 00:34:34.238011 1144110 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 00:34:34.238025 1144110 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/addons for local assets ...
	I1212 00:34:34.238091 1144110 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/files for local assets ...
	I1212 00:34:34.238184 1144110 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> 11173832.pem in /etc/ssl/certs
	I1212 00:34:34.238196 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> /etc/ssl/certs/11173832.pem
	I1212 00:34:34.238306 1144110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:34:34.248661 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem --> /etc/ssl/certs/11173832.pem (1708 bytes)
	I1212 00:34:34.277071 1144110 start.go:303] post-start completed in 163.841827ms
	I1212 00:34:34.277509 1144110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-996779
	I1212 00:34:34.295580 1144110 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/config.json ...
	I1212 00:34:34.295862 1144110 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:34:34.295903 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:34:34.313421 1144110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa Username:docker}
	I1212 00:34:34.411302 1144110 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:34:34.416960 1144110 start.go:128] duration metric: createHost completed in 12.585148374s
	I1212 00:34:34.416987 1144110 start.go:83] releasing machines lock for "ingress-addon-legacy-996779", held for 12.585275485s
	I1212 00:34:34.417058 1144110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-996779
	I1212 00:34:34.435971 1144110 ssh_runner.go:195] Run: cat /version.json
	I1212 00:34:34.436019 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:34:34.436274 1144110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:34:34.436329 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:34:34.456111 1144110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa Username:docker}
	I1212 00:34:34.457020 1144110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa Username:docker}
	I1212 00:34:34.550005 1144110 ssh_runner.go:195] Run: systemctl --version
	I1212 00:34:34.688090 1144110 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:34:34.837391 1144110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:34:34.842882 1144110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:34:34.868378 1144110 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 00:34:34.868467 1144110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:34:34.907656 1144110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1212 00:34:34.907677 1144110 start.go:475] detecting cgroup driver to use...
	I1212 00:34:34.907710 1144110 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 00:34:34.907759 1144110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:34:34.926115 1144110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:34:34.939372 1144110 docker.go:203] disabling cri-docker service (if available) ...
	I1212 00:34:34.939487 1144110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:34:34.955406 1144110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:34:34.972660 1144110 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:34:35.066963 1144110 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:34:35.166100 1144110 docker.go:219] disabling docker service ...
	I1212 00:34:35.166223 1144110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:34:35.188658 1144110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:34:35.203688 1144110 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:34:35.307141 1144110 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:34:35.410757 1144110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:34:35.423773 1144110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:34:35.443225 1144110 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 00:34:35.443317 1144110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:35.455180 1144110 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:34:35.455266 1144110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:35.467528 1144110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:35.479417 1144110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:35.491658 1144110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:34:35.502946 1144110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:34:35.512836 1144110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:34:35.522852 1144110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:34:35.617677 1144110 ssh_runner.go:195] Run: sudo systemctl restart crio
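The CRI-O preparation in the preceding lines amounts to a pause-image override, a cgroup-driver switch and an IP-forwarding toggle. Collected in one place, as run on the node:

	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio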
	I1212 00:34:35.742892 1144110 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:34:35.743008 1144110 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:34:35.748320 1144110 start.go:543] Will wait 60s for crictl version
	I1212 00:34:35.748385 1144110 ssh_runner.go:195] Run: which crictl
	I1212 00:34:35.752685 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:34:35.798640 1144110 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1212 00:34:35.798731 1144110 ssh_runner.go:195] Run: crio --version
	I1212 00:34:35.846063 1144110 ssh_runner.go:195] Run: crio --version
	I1212 00:34:35.888813 1144110 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1212 00:34:35.890782 1144110 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-996779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:34:35.907867 1144110 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:34:35.912533 1144110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:34:35.925916 1144110 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 00:34:35.925990 1144110 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:34:35.975454 1144110 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1212 00:34:35.975532 1144110 ssh_runner.go:195] Run: which lz4
	I1212 00:34:35.979874 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1212 00:34:35.979968 1144110 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 00:34:35.984122 1144110 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 00:34:35.984156 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1212 00:34:38.096266 1144110 crio.go:444] Took 2.116328 seconds to copy over tarball
	I1212 00:34:38.096342 1144110 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 00:34:40.811421 1144110 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.715048176s)
	I1212 00:34:40.811452 1144110 crio.go:451] Took 2.715161 seconds to extract the tarball
	I1212 00:34:40.811463 1144110 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 00:34:40.909383 1144110 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:34:40.952498 1144110 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1212 00:34:40.952520 1144110 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 00:34:40.952585 1144110 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:34:40.952778 1144110 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 00:34:40.952850 1144110 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 00:34:40.952936 1144110 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 00:34:40.953020 1144110 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 00:34:40.953086 1144110 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1212 00:34:40.953145 1144110 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1212 00:34:40.953272 1144110 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1212 00:34:40.954174 1144110 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 00:34:40.954649 1144110 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 00:34:40.954913 1144110 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 00:34:40.955241 1144110 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 00:34:40.955294 1144110 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:34:40.955331 1144110 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1212 00:34:40.955373 1144110 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1212 00:34:40.955405 1144110 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	W1212 00:34:41.297348 1144110 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1212 00:34:41.297601 1144110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1212 00:34:41.321563 1144110 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1212 00:34:41.321798 1144110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W1212 00:34:41.339354 1144110 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1212 00:34:41.339557 1144110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 00:34:41.345652 1144110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1212 00:34:41.347960 1144110 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1212 00:34:41.348128 1144110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W1212 00:34:41.357305 1144110 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1212 00:34:41.357516 1144110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1212 00:34:41.379182 1144110 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1212 00:34:41.379254 1144110 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 00:34:41.379319 1144110 ssh_runner.go:195] Run: which crictl
	W1212 00:34:41.381699 1144110 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1212 00:34:41.381876 1144110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1212 00:34:41.454010 1144110 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1212 00:34:41.454052 1144110 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1212 00:34:41.454104 1144110 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.532646 1144110 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1212 00:34:41.532690 1144110 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 00:34:41.532737 1144110 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.532821 1144110 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1212 00:34:41.532837 1144110 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 00:34:41.532857 1144110 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.532923 1144110 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1212 00:34:41.532941 1144110 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 00:34:41.532966 1144110 ssh_runner.go:195] Run: which crictl
	W1212 00:34:41.551135 1144110 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1212 00:34:41.551328 1144110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:34:41.578486 1144110 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1212 00:34:41.578527 1144110 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1212 00:34:41.578576 1144110 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.578658 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1212 00:34:41.578719 1144110 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1212 00:34:41.578733 1144110 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 00:34:41.578757 1144110 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.578807 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1212 00:34:41.578885 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1212 00:34:41.578925 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 00:34:41.578966 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 00:34:41.737312 1144110 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1212 00:34:41.737396 1144110 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:34:41.737482 1144110 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.741455 1144110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1212 00:34:41.741533 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1212 00:34:41.741612 1144110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1212 00:34:41.741647 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1212 00:34:41.741715 1144110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1212 00:34:41.741770 1144110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1212 00:34:41.741810 1144110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1212 00:34:41.744612 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:34:41.797939 1144110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1212 00:34:41.804800 1144110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1212 00:34:41.839872 1144110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 00:34:41.839942 1144110 cache_images.go:92] LoadImages completed in 887.409646ms
	W1212 00:34:41.840012 1144110 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
	I1212 00:34:41.840085 1144110 ssh_runner.go:195] Run: crio config
	I1212 00:34:41.899848 1144110 cni.go:84] Creating CNI manager for ""
	I1212 00:34:41.899916 1144110 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:34:41.899963 1144110 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 00:34:41.900005 1144110 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-996779 NodeName:ingress-addon-legacy-996779 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 00:34:41.900229 1144110 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-996779"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:34:41.900321 1144110 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-996779 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-996779 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 00:34:41.900421 1144110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1212 00:34:41.911241 1144110 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:34:41.911385 1144110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:34:41.921888 1144110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1212 00:34:41.943373 1144110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1212 00:34:41.964143 1144110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1212 00:34:41.984868 1144110 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:34:41.989386 1144110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:34:42.003745 1144110 certs.go:56] Setting up /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779 for IP: 192.168.49.2
	I1212 00:34:42.003783 1144110 certs.go:190] acquiring lock for shared ca certs: {Name:mk50788b4819ee46b65351495e43cdf246a6ddce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:42.004055 1144110 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key
	I1212 00:34:42.004131 1144110 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key
	I1212 00:34:42.004195 1144110 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.key
	I1212 00:34:42.004208 1144110 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt with IP's: []
	I1212 00:34:43.037408 1144110 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt ...
	I1212 00:34:43.037441 1144110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: {Name:mk6ac0e137aee842cbcd456a55f45a6647393aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:43.037640 1144110 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.key ...
	I1212 00:34:43.037655 1144110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.key: {Name:mk58861a542846cd448c54c06f1dc30fd1c29ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:43.037756 1144110 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.key.dd3b5fb2
	I1212 00:34:43.037775 1144110 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 00:34:43.318509 1144110 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.crt.dd3b5fb2 ...
	I1212 00:34:43.318540 1144110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.crt.dd3b5fb2: {Name:mk559eb62e656c481bd4787f0152c86b1ec62bb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:43.318728 1144110 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.key.dd3b5fb2 ...
	I1212 00:34:43.318743 1144110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.key.dd3b5fb2: {Name:mkaada065d27109ec7fc10d382956777a02ae880 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:43.318828 1144110 certs.go:337] copying /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.crt
	I1212 00:34:43.318907 1144110 certs.go:341] copying /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.key
	I1212 00:34:43.318972 1144110 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.key
	I1212 00:34:43.318992 1144110 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.crt with IP's: []
	I1212 00:34:43.709054 1144110 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.crt ...
	I1212 00:34:43.709087 1144110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.crt: {Name:mk91191acf61975e76fc6fae02d794df9458aeed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:43.709282 1144110 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.key ...
	I1212 00:34:43.709304 1144110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.key: {Name:mkfa5b06c0d332bfb588e2a8041b35a95d9b90c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:43.709393 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:34:43.709422 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:34:43.709433 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:34:43.709445 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:34:43.709462 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:34:43.709478 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:34:43.709490 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:34:43.709505 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:34:43.709556 1144110 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383.pem (1338 bytes)
	W1212 00:34:43.709599 1144110 certs.go:433] ignoring /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383_empty.pem, impossibly tiny 0 bytes
	I1212 00:34:43.709613 1144110 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:34:43.709647 1144110 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:34:43.709679 1144110 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:34:43.709713 1144110 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem (1679 bytes)
	I1212 00:34:43.709763 1144110 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem (1708 bytes)
	I1212 00:34:43.709795 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> /usr/share/ca-certificates/11173832.pem
	I1212 00:34:43.709812 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:43.709826 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383.pem -> /usr/share/ca-certificates/1117383.pem
	I1212 00:34:43.710402 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 00:34:43.741081 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:34:43.770533 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:34:43.799128 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:34:43.827279 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:34:43.855583 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:34:43.883278 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:34:43.911526 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:34:43.939579 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem --> /usr/share/ca-certificates/11173832.pem (1708 bytes)
	I1212 00:34:43.967368 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:34:43.995334 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383.pem --> /usr/share/ca-certificates/1117383.pem (1338 bytes)
	I1212 00:34:44.025370 1144110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:34:44.047075 1144110 ssh_runner.go:195] Run: openssl version
	I1212 00:34:44.054375 1144110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11173832.pem && ln -fs /usr/share/ca-certificates/11173832.pem /etc/ssl/certs/11173832.pem"
	I1212 00:34:44.066183 1144110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11173832.pem
	I1212 00:34:44.070984 1144110 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:25 /usr/share/ca-certificates/11173832.pem
	I1212 00:34:44.071051 1144110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11173832.pem
	I1212 00:34:44.079787 1144110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11173832.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:34:44.091645 1144110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:34:44.103081 1144110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:44.107795 1144110 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:44.107882 1144110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:44.116613 1144110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:34:44.128610 1144110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1117383.pem && ln -fs /usr/share/ca-certificates/1117383.pem /etc/ssl/certs/1117383.pem"
	I1212 00:34:44.140724 1144110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1117383.pem
	I1212 00:34:44.145370 1144110 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:25 /usr/share/ca-certificates/1117383.pem
	I1212 00:34:44.145435 1144110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1117383.pem
	I1212 00:34:44.154006 1144110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1117383.pem /etc/ssl/certs/51391683.0"
	I1212 00:34:44.165564 1144110 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 00:34:44.170083 1144110 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 00:34:44.170154 1144110 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-996779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-996779 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:34:44.170252 1144110 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:34:44.170317 1144110 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:34:44.210428 1144110 cri.go:89] found id: ""
	I1212 00:34:44.210509 1144110 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:34:44.221450 1144110 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:34:44.231886 1144110 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:34:44.232026 1144110 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:34:44.242539 1144110 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:34:44.242600 1144110 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:34:44.297748 1144110 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1212 00:34:44.298268 1144110 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 00:34:44.358100 1144110 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:34:44.358213 1144110 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1212 00:34:44.358274 1144110 kubeadm.go:322] OS: Linux
	I1212 00:34:44.358337 1144110 kubeadm.go:322] CGROUPS_CPU: enabled
	I1212 00:34:44.358409 1144110 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1212 00:34:44.358479 1144110 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1212 00:34:44.358559 1144110 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1212 00:34:44.358623 1144110 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1212 00:34:44.358702 1144110 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1212 00:34:44.449492 1144110 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:34:44.449701 1144110 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:34:44.449803 1144110 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 00:34:44.681330 1144110 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:34:44.682939 1144110 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:34:44.683189 1144110 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 00:34:44.793614 1144110 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:34:44.798353 1144110 out.go:204]   - Generating certificates and keys ...
	I1212 00:34:44.798479 1144110 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 00:34:44.798605 1144110 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 00:34:45.225195 1144110 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:34:45.746651 1144110 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:34:46.211611 1144110 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:34:46.498478 1144110 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 00:34:46.767332 1144110 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 00:34:46.767631 1144110 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-996779 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:34:46.886971 1144110 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 00:34:46.887317 1144110 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-996779 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:34:47.553043 1144110 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:34:47.774843 1144110 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:34:48.430223 1144110 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 00:34:48.430874 1144110 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:34:49.073708 1144110 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:34:49.203550 1144110 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:34:49.765906 1144110 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:34:49.997726 1144110 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:34:49.998441 1144110 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:34:50.005836 1144110 out.go:204]   - Booting up control plane ...
	I1212 00:34:50.005953 1144110 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:34:50.011906 1144110 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:34:50.013941 1144110 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:34:50.015483 1144110 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:34:50.018544 1144110 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 00:35:02.021668 1144110 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002655 seconds
	I1212 00:35:02.021785 1144110 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:35:02.034961 1144110 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:35:02.553706 1144110 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:35:02.553848 1144110 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-996779 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1212 00:35:03.061004 1144110 kubeadm.go:322] [bootstrap-token] Using token: z4ajwh.1qn32homr9mxalew
	I1212 00:35:03.063214 1144110 out.go:204]   - Configuring RBAC rules ...
	I1212 00:35:03.063351 1144110 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:35:03.073685 1144110 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:35:03.092866 1144110 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:35:03.097745 1144110 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:35:03.100867 1144110 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:35:03.108634 1144110 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:35:03.123434 1144110 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:35:03.430928 1144110 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 00:35:03.605721 1144110 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 00:35:03.607962 1144110 kubeadm.go:322] 
	I1212 00:35:03.608033 1144110 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 00:35:03.608062 1144110 kubeadm.go:322] 
	I1212 00:35:03.608140 1144110 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 00:35:03.608149 1144110 kubeadm.go:322] 
	I1212 00:35:03.608174 1144110 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 00:35:03.608230 1144110 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:35:03.608282 1144110 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:35:03.608290 1144110 kubeadm.go:322] 
	I1212 00:35:03.608340 1144110 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 00:35:03.608414 1144110 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:35:03.608497 1144110 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:35:03.608506 1144110 kubeadm.go:322] 
	I1212 00:35:03.608585 1144110 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:35:03.608665 1144110 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 00:35:03.608673 1144110 kubeadm.go:322] 
	I1212 00:35:03.608752 1144110 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token z4ajwh.1qn32homr9mxalew \
	I1212 00:35:03.608865 1144110 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:423d166c085e277a11bea519bc38c8d176eb97d5c6d6f0fd8c403765ff119d59 \
	I1212 00:35:03.608891 1144110 kubeadm.go:322]     --control-plane 
	I1212 00:35:03.608899 1144110 kubeadm.go:322] 
	I1212 00:35:03.608999 1144110 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:35:03.609008 1144110 kubeadm.go:322] 
	I1212 00:35:03.609088 1144110 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token z4ajwh.1qn32homr9mxalew \
	I1212 00:35:03.609190 1144110 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:423d166c085e277a11bea519bc38c8d176eb97d5c6d6f0fd8c403765ff119d59 
	I1212 00:35:03.610044 1144110 kubeadm.go:322] W1212 00:34:44.296984    1236 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1212 00:35:03.610262 1144110 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1212 00:35:03.610383 1144110 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:35:03.610508 1144110 kubeadm.go:322] W1212 00:34:50.012080    1236 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 00:35:03.610635 1144110 kubeadm.go:322] W1212 00:34:50.014067    1236 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 00:35:03.610652 1144110 cni.go:84] Creating CNI manager for ""
	I1212 00:35:03.610667 1144110 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:03.613477 1144110 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 00:35:03.615823 1144110 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:35:03.625352 1144110 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1212 00:35:03.625371 1144110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 00:35:03.649557 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:35:04.139756 1144110 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:35:04.139846 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:04.139873 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4 minikube.k8s.io/name=ingress-addon-legacy-996779 minikube.k8s.io/updated_at=2023_12_12T00_35_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:04.284979 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:04.285047 1144110 ops.go:34] apiserver oom_adj: -16
	I1212 00:35:04.380524 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:04.972860 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:05.472513 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:05.973342 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:06.472489 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:06.973040 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:07.473370 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:07.972858 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:08.472400 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:08.972432 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:09.472461 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:09.972451 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:10.473271 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:10.973032 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:11.473040 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:11.973075 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:12.473119 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:12.972963 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:13.473386 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:13.973385 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:14.473392 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:14.972585 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:15.472917 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:15.972758 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:16.472457 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:16.973207 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:17.473191 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:17.972649 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:18.472943 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:18.571094 1144110 kubeadm.go:1088] duration metric: took 14.431322985s to wait for elevateKubeSystemPrivileges.
	I1212 00:35:18.571126 1144110 kubeadm.go:406] StartCluster complete in 34.400979067s
	I1212 00:35:18.571143 1144110 settings.go:142] acquiring lock: {Name:mk4639df610f4394c6679c82a1803a108086063e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:18.571229 1144110 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:35:18.571917 1144110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/kubeconfig: {Name:mk6bda1f8356012618f11e41d531a3f786e443d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:18.572629 1144110 kapi.go:59] client config for ingress-addon-legacy-996779: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.key", CAFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7710), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:35:18.573446 1144110 config.go:182] Loaded profile config "ingress-addon-legacy-996779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1212 00:35:18.573506 1144110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:35:18.573612 1144110 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 00:35:18.573677 1144110 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-996779"
	I1212 00:35:18.573691 1144110 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-996779"
	I1212 00:35:18.573735 1144110 host.go:66] Checking if "ingress-addon-legacy-996779" exists ...
	I1212 00:35:18.574195 1144110 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996779 --format={{.State.Status}}
	I1212 00:35:18.575092 1144110 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 00:35:18.575465 1144110 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-996779"
	I1212 00:35:18.575488 1144110 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-996779"
	I1212 00:35:18.575783 1144110 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996779 --format={{.State.Status}}
	I1212 00:35:18.650619 1144110 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:35:18.649460 1144110 kapi.go:59] client config for ingress-addon-legacy-996779: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.key", CAFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7710), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:35:18.653300 1144110 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-996779"
	I1212 00:35:18.653343 1144110 host.go:66] Checking if "ingress-addon-legacy-996779" exists ...
	I1212 00:35:18.653828 1144110 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996779 --format={{.State.Status}}
	I1212 00:35:18.654084 1144110 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:35:18.654101 1144110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:35:18.654143 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:35:18.698112 1144110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa Username:docker}
	I1212 00:35:18.705461 1144110 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:35:18.705491 1144110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:35:18.705553 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:35:18.713547 1144110 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-996779" context rescaled to 1 replicas
	I1212 00:35:18.713592 1144110 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:35:18.717823 1144110 out.go:177] * Verifying Kubernetes components...
	I1212 00:35:18.720530 1144110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:35:18.731787 1144110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:35:18.741391 1144110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa Username:docker}
	I1212 00:35:18.760663 1144110 kapi.go:59] client config for ingress-addon-legacy-996779: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.key", CAFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7710), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:35:18.761029 1144110 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-996779" to be "Ready" ...
	I1212 00:35:18.917777 1144110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:35:18.991821 1144110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:35:19.202878 1144110 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1212 00:35:19.394617 1144110 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 00:35:19.396514 1144110 addons.go:502] enable addons completed in 822.912116ms: enabled=[storage-provisioner default-storageclass]
	I1212 00:35:20.782650 1144110 node_ready.go:58] node "ingress-addon-legacy-996779" has status "Ready":"False"
	I1212 00:35:23.279713 1144110 node_ready.go:58] node "ingress-addon-legacy-996779" has status "Ready":"False"
	I1212 00:35:25.779854 1144110 node_ready.go:58] node "ingress-addon-legacy-996779" has status "Ready":"False"
	I1212 00:35:27.279265 1144110 node_ready.go:49] node "ingress-addon-legacy-996779" has status "Ready":"True"
	I1212 00:35:27.279293 1144110 node_ready.go:38] duration metric: took 8.51822554s waiting for node "ingress-addon-legacy-996779" to be "Ready" ...
	I1212 00:35:27.279304 1144110 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:35:27.286688 1144110 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-fdsk9" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:29.294293 1144110 pod_ready.go:102] pod "coredns-66bff467f8-fdsk9" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 00:35:18 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1212 00:35:31.294550 1144110 pod_ready.go:102] pod "coredns-66bff467f8-fdsk9" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 00:35:18 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1212 00:35:33.296934 1144110 pod_ready.go:102] pod "coredns-66bff467f8-fdsk9" in "kube-system" namespace has status "Ready":"False"
	I1212 00:35:35.796762 1144110 pod_ready.go:102] pod "coredns-66bff467f8-fdsk9" in "kube-system" namespace has status "Ready":"False"
	I1212 00:35:37.797024 1144110 pod_ready.go:92] pod "coredns-66bff467f8-fdsk9" in "kube-system" namespace has status "Ready":"True"
	I1212 00:35:37.797052 1144110 pod_ready.go:81] duration metric: took 10.510331682s waiting for pod "coredns-66bff467f8-fdsk9" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.797064 1144110 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-996779" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.801358 1144110 pod_ready.go:92] pod "etcd-ingress-addon-legacy-996779" in "kube-system" namespace has status "Ready":"True"
	I1212 00:35:37.801385 1144110 pod_ready.go:81] duration metric: took 4.312619ms waiting for pod "etcd-ingress-addon-legacy-996779" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.801399 1144110 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-996779" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.805550 1144110 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-996779" in "kube-system" namespace has status "Ready":"True"
	I1212 00:35:37.805575 1144110 pod_ready.go:81] duration metric: took 4.168107ms waiting for pod "kube-apiserver-ingress-addon-legacy-996779" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.805586 1144110 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-996779" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.809750 1144110 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-996779" in "kube-system" namespace has status "Ready":"True"
	I1212 00:35:37.809774 1144110 pod_ready.go:81] duration metric: took 4.180939ms waiting for pod "kube-controller-manager-ingress-addon-legacy-996779" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.809786 1144110 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d7hfm" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.814007 1144110 pod_ready.go:92] pod "kube-proxy-d7hfm" in "kube-system" namespace has status "Ready":"True"
	I1212 00:35:37.814034 1144110 pod_ready.go:81] duration metric: took 4.238488ms waiting for pod "kube-proxy-d7hfm" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.814044 1144110 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-996779" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.993451 1144110 request.go:629] Waited for 179.314666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-996779
	I1212 00:35:38.193178 1144110 request.go:629] Waited for 197.318171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-996779
	I1212 00:35:38.195760 1144110 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-996779" in "kube-system" namespace has status "Ready":"True"
	I1212 00:35:38.195790 1144110 pod_ready.go:81] duration metric: took 381.734504ms waiting for pod "kube-scheduler-ingress-addon-legacy-996779" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:38.195803 1144110 pod_ready.go:38] duration metric: took 10.916482671s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:35:38.195817 1144110 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:35:38.195877 1144110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:35:38.208611 1144110 api_server.go:72] duration metric: took 19.494984541s to wait for apiserver process to appear ...
	I1212 00:35:38.208637 1144110 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:35:38.208653 1144110 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 00:35:38.217206 1144110 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 00:35:38.218085 1144110 api_server.go:141] control plane version: v1.18.20
	I1212 00:35:38.218111 1144110 api_server.go:131] duration metric: took 9.467298ms to wait for apiserver health ...
	I1212 00:35:38.218121 1144110 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:35:38.392450 1144110 request.go:629] Waited for 174.241372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1212 00:35:38.398136 1144110 system_pods.go:59] 8 kube-system pods found
	I1212 00:35:38.398171 1144110 system_pods.go:61] "coredns-66bff467f8-fdsk9" [f4a5ac98-fd88-41d5-a8f9-70a22dfca002] Running
	I1212 00:35:38.398178 1144110 system_pods.go:61] "etcd-ingress-addon-legacy-996779" [e2cc00a8-43c1-41b4-8c56-a7a8f0a8fde7] Running
	I1212 00:35:38.398183 1144110 system_pods.go:61] "kindnet-vtlkw" [cb7c6c14-13c5-46fe-be06-c0ee5259bfd9] Running
	I1212 00:35:38.398188 1144110 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-996779" [661c3601-3a0f-463d-a893-3d94c2ffb917] Running
	I1212 00:35:38.398227 1144110 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-996779" [e882d6c8-2fd0-4ba8-986b-b2c8c7251934] Running
	I1212 00:35:38.398240 1144110 system_pods.go:61] "kube-proxy-d7hfm" [d842c03e-6616-4f70-b70f-7c1e160858c9] Running
	I1212 00:35:38.398246 1144110 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-996779" [86dbb1c1-f575-4767-a090-f67ebd6fe628] Running
	I1212 00:35:38.398251 1144110 system_pods.go:61] "storage-provisioner" [9610964a-cbe8-4bdd-9b4f-f4438f39b894] Running
	I1212 00:35:38.398256 1144110 system_pods.go:74] duration metric: took 180.130118ms to wait for pod list to return data ...
	I1212 00:35:38.398267 1144110 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:35:38.592729 1144110 request.go:629] Waited for 194.367341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:35:38.595162 1144110 default_sa.go:45] found service account: "default"
	I1212 00:35:38.595194 1144110 default_sa.go:55] duration metric: took 196.919832ms for default service account to be created ...
	I1212 00:35:38.595204 1144110 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:35:38.792523 1144110 request.go:629] Waited for 197.259391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1212 00:35:38.798199 1144110 system_pods.go:86] 8 kube-system pods found
	I1212 00:35:38.798232 1144110 system_pods.go:89] "coredns-66bff467f8-fdsk9" [f4a5ac98-fd88-41d5-a8f9-70a22dfca002] Running
	I1212 00:35:38.798239 1144110 system_pods.go:89] "etcd-ingress-addon-legacy-996779" [e2cc00a8-43c1-41b4-8c56-a7a8f0a8fde7] Running
	I1212 00:35:38.798244 1144110 system_pods.go:89] "kindnet-vtlkw" [cb7c6c14-13c5-46fe-be06-c0ee5259bfd9] Running
	I1212 00:35:38.798250 1144110 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-996779" [661c3601-3a0f-463d-a893-3d94c2ffb917] Running
	I1212 00:35:38.798255 1144110 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-996779" [e882d6c8-2fd0-4ba8-986b-b2c8c7251934] Running
	I1212 00:35:38.798259 1144110 system_pods.go:89] "kube-proxy-d7hfm" [d842c03e-6616-4f70-b70f-7c1e160858c9] Running
	I1212 00:35:38.798264 1144110 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-996779" [86dbb1c1-f575-4767-a090-f67ebd6fe628] Running
	I1212 00:35:38.798269 1144110 system_pods.go:89] "storage-provisioner" [9610964a-cbe8-4bdd-9b4f-f4438f39b894] Running
	I1212 00:35:38.798275 1144110 system_pods.go:126] duration metric: took 203.067006ms to wait for k8s-apps to be running ...
	I1212 00:35:38.798282 1144110 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:35:38.798345 1144110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:35:38.812055 1144110 system_svc.go:56] duration metric: took 13.761801ms WaitForService to wait for kubelet.
	I1212 00:35:38.812079 1144110 kubeadm.go:581] duration metric: took 20.098460765s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 00:35:38.812097 1144110 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:35:38.992415 1144110 request.go:629] Waited for 180.240932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1212 00:35:38.995322 1144110 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 00:35:38.995358 1144110 node_conditions.go:123] node cpu capacity is 2
	I1212 00:35:38.995373 1144110 node_conditions.go:105] duration metric: took 183.268404ms to run NodePressure ...
	I1212 00:35:38.995405 1144110 start.go:228] waiting for startup goroutines ...
	I1212 00:35:38.995421 1144110 start.go:233] waiting for cluster config update ...
	I1212 00:35:38.995432 1144110 start.go:242] writing updated cluster config ...
	I1212 00:35:38.995732 1144110 ssh_runner.go:195] Run: rm -f paused
	I1212 00:35:39.059000 1144110 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1212 00:35:39.061545 1144110 out.go:177] 
	W1212 00:35:39.063428 1144110 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1212 00:35:39.065092 1144110 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1212 00:35:39.066870 1144110 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-996779" cluster and "default" namespace by default
	
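Note: the CoreDNS ConfigMap rewrite logged at 00:35:18 above is what makes host.minikube.internal resolve to the host gateway (192.168.49.1). A minimal sketch of the resulting Corefile, reconstructed only from the sed expressions in that command (the "..." lines stand for whatever the stock Corefile already contained), looks roughly like:

	.:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	}
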
	* 
	* ==> CRI-O <==
	* Dec 12 00:40:06 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:40:06.840421745Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:489397,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2a319a59-b86f-477b-a2c1-164f91cc855d name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:40:15 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:40:15.847782107Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=00fe33db-f410-4f3f-827d-870db440ffb3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:40:15 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:40:15.848056158Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=00fe33db-f410-4f3f-827d-870db440ffb3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:40:27 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:40:27.847938064Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=f813f1c7-f61f-4991-95d9-b4ff2a632055 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:40:27 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:40:27.848222470Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=f813f1c7-f61f-4991-95d9-b4ff2a632055 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:40:32 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:40:32.847984143Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=428d8115-3f44-4558-9d3f-9c561d07cbb5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:40:32 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:40:32.848271314Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=428d8115-3f44-4558-9d3f-9c561d07cbb5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:40:42 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:40:42.847814541Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=a392cb15-a363-4e95-bda5-6cfdde232717 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:40:42 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:40:42.848083382Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=a392cb15-a363-4e95-bda5-6cfdde232717 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:40:47 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:40:47.847714648Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=43b01c3c-8c41-41b8-9f84-c05216c3ce94 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:40:47 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:40:47.847993926Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=43b01c3c-8c41-41b8-9f84-c05216c3ce94 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:40:57 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:40:57.847790687Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=58ef85e6-d54e-4d04-8112-4a8ca70ca95d name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:40:57 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:40:57.848087187Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=58ef85e6-d54e-4d04-8112-4a8ca70ca95d name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:40:58 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:40:58.847781630Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=71673c34-dacf-4e82-8c33-f9dade5c9d24 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:40:58 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:40:58.848051981Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=71673c34-dacf-4e82-8c33-f9dade5c9d24 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:41:12 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:41:12.847821884Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=a7414069-00af-4b72-9067-7547901e3a4e name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:41:12 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:41:12.848086639Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=a7414069-00af-4b72-9067-7547901e3a4e name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:41:12 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:41:12.848831446Z" level=info msg="Pulling image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=d4081dd1-f2ca-4451-be0c-33f4e1d6a025 name=/runtime.v1alpha2.ImageService/PullImage
	Dec 12 00:41:12 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:41:12.850724608Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:41:13 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:41:13.847767470Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=6acfa1b7-5795-4955-a615-95a76d06e9ba name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:41:13 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:41:13.848043244Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=6acfa1b7-5795-4955-a615-95a76d06e9ba name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:41:25 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:41:25.847747887Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=85340fb5-56b8-4381-9bed-94d2c1874bb2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:41:25 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:41:25.848029225Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=85340fb5-56b8-4381-9bed-94d2c1874bb2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:41:36 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:41:36.848130139Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=430b0a01-d8fe-4e71-a556-0858dca1662c name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:41:36 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:41:36.848406520Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=430b0a01-d8fe-4e71-a556-0858dca1662c name=/runtime.v1alpha2.ImageService/ImageStatus
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ff8f0eb447271       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2   6 minutes ago       Running             storage-provisioner       0                   61e2a4828b083       storage-provisioner
	3b77b68bac2b0       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                  6 minutes ago       Running             coredns                   0                   78b029263a8b4       coredns-66bff467f8-fdsk9
	fe0391b007c1a       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                6 minutes ago       Running             kindnet-cni               0                   c990111658782       kindnet-vtlkw
	98591e814415a       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                  6 minutes ago       Running             kube-proxy                0                   39d923c7d6040       kube-proxy-d7hfm
	fa5a904c833a9       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                  6 minutes ago       Running             etcd                      0                   f5bc34c7eaaad       etcd-ingress-addon-legacy-996779
	37cc807eb8db0       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                  6 minutes ago       Running             kube-apiserver            0                   f96b59e1d42bf       kube-apiserver-ingress-addon-legacy-996779
	ccc7574d027de       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                  6 minutes ago       Running             kube-controller-manager   0                   ca79d23e526c6       kube-controller-manager-ingress-addon-legacy-996779
	b37841b0ca7e6       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                  6 minutes ago       Running             kube-scheduler            0                   fb854e212adbf       kube-scheduler-ingress-addon-legacy-996779
	
	* 
	* ==> coredns [3b77b68bac2b04afc3e7d721d997ed77ddd55c75453a2536c06e9f802f3f8a01] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:40791 - 6159 "HINFO IN 4484289827737440315.2223997190913354484. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012332821s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-996779
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-996779
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4
	                    minikube.k8s.io/name=ingress-addon-legacy-996779
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T00_35_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 00:35:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-996779
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 00:41:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 00:40:37 +0000   Tue, 12 Dec 2023 00:34:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 00:40:37 +0000   Tue, 12 Dec 2023 00:34:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 00:40:37 +0000   Tue, 12 Dec 2023 00:34:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 00:40:37 +0000   Tue, 12 Dec 2023 00:35:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-996779
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 784c7fc04ccc40988beb20f93b3be49d
	  System UUID:                8949106a-73b7-4519-9c40-203ad5cc8066
	  Boot ID:                    1e71add7-2409-4eb4-97fc-c7110220f3c5
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  ingress-nginx               ingress-nginx-admission-create-hmhpc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  ingress-nginx               ingress-nginx-admission-patch-dj25d                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  ingress-nginx               ingress-nginx-controller-7fcf777cb7-nvvrd              100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m2s
	  kube-system                 coredns-66bff467f8-fdsk9                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m23s
	  kube-system                 etcd-ingress-addon-legacy-996779                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kindnet-vtlkw                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m23s
	  kube-system                 kube-apiserver-ingress-addon-legacy-996779             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-996779    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-proxy-d7hfm                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-scheduler-ingress-addon-legacy-996779             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             210Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 6m35s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m35s  kubelet     Node ingress-addon-legacy-996779 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s  kubelet     Node ingress-addon-legacy-996779 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s  kubelet     Node ingress-addon-legacy-996779 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m22s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                6m15s  kubelet     Node ingress-addon-legacy-996779 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001117] FS-Cache: O-key=[8] '12633b0000000000'
	[  +0.000754] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000973] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=0000000059a16183
	[  +0.001084] FS-Cache: N-key=[8] '12633b0000000000'
	[  +0.003102] FS-Cache: Duplicate cookie detected
	[  +0.000725] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001029] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=000000006a4eadc9
	[  +0.001098] FS-Cache: O-key=[8] '12633b0000000000'
	[  +0.000729] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000971] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=00000000ef12e937
	[  +0.001096] FS-Cache: N-key=[8] '12633b0000000000'
	[  +1.721638] FS-Cache: Duplicate cookie detected
	[  +0.000740] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001038] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=000000009ed47378
	[  +0.001181] FS-Cache: O-key=[8] '11633b0000000000'
	[  +0.000791] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000997] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=0000000059a16183
	[  +0.001129] FS-Cache: N-key=[8] '11633b0000000000'
	[  +0.334169] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001009] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=000000009942789b
	[  +0.001136] FS-Cache: O-key=[8] '17633b0000000000'
	[  +0.000746] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=000000006ac44817
	[  +0.001100] FS-Cache: N-key=[8] '17633b0000000000'
	
	* 
	* ==> etcd [fa5a904c833a9ec3d6a6ecb36751bb27ec22964245bbc48fc71c4c8ef086ed32] <==
	* raft2023/12/12 00:34:55 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/12 00:34:55 INFO: aec36adc501070cc became follower at term 1
	raft2023/12/12 00:34:55 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-12 00:34:55.373658 W | auth: simple token is not cryptographically signed
	2023-12-12 00:34:55.377503 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-12 00:34:55.381569 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/12/12 00:34:55 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-12 00:34:55.382267 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-12-12 00:34:55.382613 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-12 00:34:55.382800 I | embed: listening for peers on 192.168.49.2:2380
	2023-12-12 00:34:55.382959 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/12/12 00:34:55 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/12 00:34:55 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/12 00:34:55 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/12 00:34:55 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/12 00:34:55 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-12 00:34:55.989332 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-12 00:34:56.007496 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-12 00:34:56.017323 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-12 00:34:56.021293 I | etcdserver: published {Name:ingress-addon-legacy-996779 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-12 00:34:56.025256 I | embed: ready to serve client requests
	2023-12-12 00:34:56.050204 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-12 00:34:56.097302 I | embed: ready to serve client requests
	2023-12-12 00:34:56.098591 I | embed: serving client requests on 192.168.49.2:2379
	2023-12-12 00:35:19.161272 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-66bff467f8-fdsk9.179fee68fa8f759b\" " with result "range_response_count:1 size:829" took too long (104.220542ms) to execute
	
	* 
	* ==> kernel <==
	*  00:41:41 up  7:24,  0 users,  load average: 0.22, 0.33, 0.49
	Linux ingress-addon-legacy-996779 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [fe0391b007c1a6fc35e858c2018dbca95ee2d82e45f004a50d9e9b5c92625d45] <==
	* I1212 00:39:32.201916       1 main.go:227] handling current node
	I1212 00:39:42.214159       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:39:42.214190       1 main.go:227] handling current node
	I1212 00:39:52.225491       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:39:52.225517       1 main.go:227] handling current node
	I1212 00:40:02.230638       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:40:02.230667       1 main.go:227] handling current node
	I1212 00:40:12.234352       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:40:12.234379       1 main.go:227] handling current node
	I1212 00:40:22.238088       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:40:22.238127       1 main.go:227] handling current node
	I1212 00:40:32.248232       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:40:32.248259       1 main.go:227] handling current node
	I1212 00:40:42.259567       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:40:42.259595       1 main.go:227] handling current node
	I1212 00:40:52.271623       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:40:52.271652       1 main.go:227] handling current node
	I1212 00:41:02.283279       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:41:02.283306       1 main.go:227] handling current node
	I1212 00:41:12.286312       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:41:12.286341       1 main.go:227] handling current node
	I1212 00:41:22.289410       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:41:22.289442       1 main.go:227] handling current node
	I1212 00:41:32.292270       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:41:32.292300       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [37cc807eb8db0b61a416564775bbeecb1cea6629f4a34a259723e681c4a15aca] <==
	* I1212 00:35:00.397753       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I1212 00:35:00.397798       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E1212 00:35:00.421144       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1212 00:35:00.489214       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 00:35:00.489261       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:35:00.489552       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 00:35:00.494781       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1212 00:35:00.553497       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1212 00:35:01.316029       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1212 00:35:01.316060       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1212 00:35:01.322246       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1212 00:35:01.327161       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1212 00:35:01.327249       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1212 00:35:01.717899       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:35:01.764254       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1212 00:35:01.876878       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1212 00:35:01.877931       1 controller.go:609] quota admission added evaluator for: endpoints
	I1212 00:35:01.881403       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:35:02.703715       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1212 00:35:03.407308       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1212 00:35:03.481139       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1212 00:35:06.809511       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:35:18.773766       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1212 00:35:18.798826       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1212 00:35:39.926315       1 controller.go:609] quota admission added evaluator for: jobs.batch
	
	* 
	* ==> kube-controller-manager [ccc7574d027de156202827c2d3c6f2f08c572f0da026556be7af1066e9f751ea] <==
	* I1212 00:35:18.826085       1 range_allocator.go:172] Starting range CIDR allocator
	I1212 00:35:18.826109       1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
	I1212 00:35:18.826118       1 shared_informer.go:230] Caches are synced for cidrallocator 
	I1212 00:35:18.826688       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1212 00:35:18.826900       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1212 00:35:18.830757       1 shared_informer.go:230] Caches are synced for attach detach 
	I1212 00:35:18.834121       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1212 00:35:18.834138       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1212 00:35:18.848183       1 shared_informer.go:230] Caches are synced for taint 
	I1212 00:35:18.848357       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W1212 00:35:18.848427       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-996779. Assuming now as a timestamp.
	I1212 00:35:18.848361       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1212 00:35:18.848499       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1212 00:35:18.848677       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-996779", UID:"26233a9f-95d5-40fc-99f3-eccca62ff91f", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-996779 event: Registered Node ingress-addon-legacy-996779 in Controller
	I1212 00:35:18.864643       1 range_allocator.go:373] Set node ingress-addon-legacy-996779 PodCIDR to [10.244.0.0/24]
	I1212 00:35:18.876305       1 shared_informer.go:230] Caches are synced for TTL 
	I1212 00:35:18.877945       1 shared_informer.go:230] Caches are synced for GC 
	I1212 00:35:18.898523       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"8fcddf4e-f4b6-41f1-af1c-efe5cecc7987", APIVersion:"apps/v1", ResourceVersion:"206", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-d7hfm
	E1212 00:35:19.210955       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"8fcddf4e-f4b6-41f1-af1c-efe5cecc7987", ResourceVersion:"206", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63837938103, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000c4fa40), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x4000c4faa0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000c4fb00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4000cad380), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x4000c4fb60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000c4fbc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000c4fc80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000ddc0a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000da4b18), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004688c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000f480)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000da4b68)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1212 00:35:28.848939       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1212 00:35:39.930873       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"8cba60db-69eb-4243-816b-3d3938781111", APIVersion:"apps/v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1212 00:35:39.953112       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"7d6da58c-4903-41aa-bd0e-14722a42faaa", APIVersion:"batch/v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-hmhpc
	I1212 00:35:39.953149       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"9b5ee3d1-70aa-480c-8190-31e5412fab72", APIVersion:"apps/v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-nvvrd
	I1212 00:35:39.994760       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"4e3d2b75-3995-4956-919e-85385fe3f2fe", APIVersion:"batch/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-dj25d
	
	* 
	* ==> kube-proxy [98591e814415a7f68a501b413ac3dea0b90d3e1f3d46ecf22ae957d501b471d1] <==
	* W1212 00:35:19.598282       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1212 00:35:19.609791       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1212 00:35:19.609841       1 server_others.go:186] Using iptables Proxier.
	I1212 00:35:19.610201       1 server.go:583] Version: v1.18.20
	I1212 00:35:19.613145       1 config.go:315] Starting service config controller
	I1212 00:35:19.613361       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1212 00:35:19.613672       1 config.go:133] Starting endpoints config controller
	I1212 00:35:19.613710       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1212 00:35:19.713786       1 shared_informer.go:230] Caches are synced for service config 
	I1212 00:35:19.713877       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [b37841b0ca7e6c583e5f1b2bf62b18bba025f9ac412204bf622cf40da1944da1] <==
	* W1212 00:35:00.415172       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 00:35:00.468910       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1212 00:35:00.469020       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1212 00:35:00.477993       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1212 00:35:00.478280       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:35:00.478343       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:35:00.478398       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1212 00:35:00.491405       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 00:35:00.497566       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 00:35:00.497775       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 00:35:00.497963       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 00:35:00.498098       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 00:35:00.498224       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 00:35:00.498387       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 00:35:00.498509       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 00:35:00.498639       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 00:35:00.498790       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 00:35:00.513620       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 00:35:00.513906       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 00:35:01.372949       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 00:35:01.430369       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 00:35:01.726166       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1212 00:35:04.878544       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1212 00:35:19.081674       1 factory.go:503] pod kube-system/coredns-66bff467f8-fdsk9 is already present in the backoff queue
	E1212 00:35:19.393546       1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
	
	* 
	* ==> kubelet <==
	* Dec 12 00:39:22 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:39:22.848529    1596 pod_workers.go:191] Error syncing pod fcffcc0f-d87e-4945-8a5f-70d419061f11 ("ingress-nginx-admission-create-hmhpc_ingress-nginx(fcffcc0f-d87e-4945-8a5f-70d419061f11)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:39:49 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:39:49.203244    1596 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 12 00:39:49 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:39:49.203310    1596 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 12 00:39:49 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:39:49.203502    1596 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 12 00:39:49 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:39:49.203542    1596 pod_workers.go:191] Error syncing pod 8ea9127d-bc0b-42d9-8db7-c8bd53c877d2 ("ingress-nginx-admission-patch-dj25d_ingress-nginx(8ea9127d-bc0b-42d9-8db7-c8bd53c877d2)"), skipping: failed to "StartContainer" for "patch" with ErrImagePull: "rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Dec 12 00:39:50 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:39:50.061301    1596 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
	Dec 12 00:39:50 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:39:50.061399    1596 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/2811188e-2c79-4901-9c6d-7d2f04f40e05-webhook-cert podName:2811188e-2c79-4901-9c6d-7d2f04f40e05 nodeName:}" failed. No retries permitted until 2023-12-12 00:41:52.061374829 +0000 UTC m=+408.711637653 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2811188e-2c79-4901-9c6d-7d2f04f40e05-webhook-cert\") pod \"ingress-nginx-controller-7fcf777cb7-nvvrd\" (UID: \"2811188e-2c79-4901-9c6d-7d2f04f40e05\") : secret \"ingress-nginx-admission\" not found"
	Dec 12 00:39:59 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:39:59.847886    1596 kubelet.go:1703] Unable to attach or mount volumes for pod "ingress-nginx-controller-7fcf777cb7-nvvrd_ingress-nginx(2811188e-2c79-4901-9c6d-7d2f04f40e05)": unmounted volumes=[webhook-cert], unattached volumes=[ingress-nginx-token-ksp2m webhook-cert]: timed out waiting for the condition; skipping pod
	Dec 12 00:39:59 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:39:59.847924    1596 pod_workers.go:191] Error syncing pod 2811188e-2c79-4901-9c6d-7d2f04f40e05 ("ingress-nginx-controller-7fcf777cb7-nvvrd_ingress-nginx(2811188e-2c79-4901-9c6d-7d2f04f40e05)"), skipping: unmounted volumes=[webhook-cert], unattached volumes=[ingress-nginx-token-ksp2m webhook-cert]: timed out waiting for the condition
	Dec 12 00:40:00 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:40:00.848457    1596 pod_workers.go:191] Error syncing pod 8ea9127d-bc0b-42d9-8db7-c8bd53c877d2 ("ingress-nginx-admission-patch-dj25d_ingress-nginx(8ea9127d-bc0b-42d9-8db7-c8bd53c877d2)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:40:06 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:40:06.926156    1596 container_manager_linux.go:512] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/33133c4cc7386b754bf07c14ecb0ad9cf226b60b3bcdd7868469d0efaba5278f, memory: /docker/33133c4cc7386b754bf07c14ecb0ad9cf226b60b3bcdd7868469d0efaba5278f/system.slice/kubelet.service
	Dec 12 00:40:15 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:40:15.848274    1596 pod_workers.go:191] Error syncing pod 8ea9127d-bc0b-42d9-8db7-c8bd53c877d2 ("ingress-nginx-admission-patch-dj25d_ingress-nginx(8ea9127d-bc0b-42d9-8db7-c8bd53c877d2)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:40:19 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:40:19.480433    1596 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 12 00:40:19 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:40:19.480496    1596 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 12 00:40:19 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:40:19.480561    1596 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 12 00:40:19 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:40:19.480595    1596 pod_workers.go:191] Error syncing pod fcffcc0f-d87e-4945-8a5f-70d419061f11 ("ingress-nginx-admission-create-hmhpc_ingress-nginx(fcffcc0f-d87e-4945-8a5f-70d419061f11)"), skipping: failed to "StartContainer" for "create" with ErrImagePull: "rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Dec 12 00:40:27 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:40:27.848455    1596 pod_workers.go:191] Error syncing pod 8ea9127d-bc0b-42d9-8db7-c8bd53c877d2 ("ingress-nginx-admission-patch-dj25d_ingress-nginx(8ea9127d-bc0b-42d9-8db7-c8bd53c877d2)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:40:32 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:40:32.849061    1596 pod_workers.go:191] Error syncing pod fcffcc0f-d87e-4945-8a5f-70d419061f11 ("ingress-nginx-admission-create-hmhpc_ingress-nginx(fcffcc0f-d87e-4945-8a5f-70d419061f11)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:40:42 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:40:42.848427    1596 pod_workers.go:191] Error syncing pod 8ea9127d-bc0b-42d9-8db7-c8bd53c877d2 ("ingress-nginx-admission-patch-dj25d_ingress-nginx(8ea9127d-bc0b-42d9-8db7-c8bd53c877d2)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:40:47 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:40:47.848201    1596 pod_workers.go:191] Error syncing pod fcffcc0f-d87e-4945-8a5f-70d419061f11 ("ingress-nginx-admission-create-hmhpc_ingress-nginx(fcffcc0f-d87e-4945-8a5f-70d419061f11)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:40:57 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:40:57.848317    1596 pod_workers.go:191] Error syncing pod 8ea9127d-bc0b-42d9-8db7-c8bd53c877d2 ("ingress-nginx-admission-patch-dj25d_ingress-nginx(8ea9127d-bc0b-42d9-8db7-c8bd53c877d2)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:40:58 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:40:58.848474    1596 pod_workers.go:191] Error syncing pod fcffcc0f-d87e-4945-8a5f-70d419061f11 ("ingress-nginx-admission-create-hmhpc_ingress-nginx(fcffcc0f-d87e-4945-8a5f-70d419061f11)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:41:13 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:41:13.848283    1596 pod_workers.go:191] Error syncing pod fcffcc0f-d87e-4945-8a5f-70d419061f11 ("ingress-nginx-admission-create-hmhpc_ingress-nginx(fcffcc0f-d87e-4945-8a5f-70d419061f11)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:41:25 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:41:25.848261    1596 pod_workers.go:191] Error syncing pod fcffcc0f-d87e-4945-8a5f-70d419061f11 ("ingress-nginx-admission-create-hmhpc_ingress-nginx(fcffcc0f-d87e-4945-8a5f-70d419061f11)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:41:36 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:41:36.848748    1596 pod_workers.go:191] Error syncing pod fcffcc0f-d87e-4945-8a5f-70d419061f11 ("ingress-nginx-admission-create-hmhpc_ingress-nginx(fcffcc0f-d87e-4945-8a5f-70d419061f11)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	
	* 
	* ==> storage-provisioner [ff8f0eb44727154b75577edcbe42bd079daebaf3ab30852d958fbb8e0f0324b7] <==
	* I1212 00:35:32.350555       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:35:32.364194       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:35:32.364743       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 00:35:32.372504       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:35:32.372686       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-996779_79aa9244-d7c7-471b-a2f2-dad8f89fa9b9!
	I1212 00:35:32.373924       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4c690da1-02fd-4178-92cd-d3dd8aac3e57", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-996779_79aa9244-d7c7-471b-a2f2-dad8f89fa9b9 became leader
	I1212 00:35:32.473810       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-996779_79aa9244-d7c7-471b-a2f2-dad8f89fa9b9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-996779 -n ingress-addon-legacy-996779
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-996779 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-hmhpc ingress-nginx-admission-patch-dj25d ingress-nginx-controller-7fcf777cb7-nvvrd
helpers_test.go:274: ======> post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ingress-addon-legacy-996779 describe pod ingress-nginx-admission-create-hmhpc ingress-nginx-admission-patch-dj25d ingress-nginx-controller-7fcf777cb7-nvvrd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-996779 describe pod ingress-nginx-admission-create-hmhpc ingress-nginx-admission-patch-dj25d ingress-nginx-controller-7fcf777cb7-nvvrd: exit status 1 (85.570282ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hmhpc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dj25d" not found
	Error from server (NotFound): pods "ingress-nginx-controller-7fcf777cb7-nvvrd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ingress-addon-legacy-996779 describe pod ingress-nginx-admission-create-hmhpc ingress-nginx-admission-patch-dj25d ingress-nginx-controller-7fcf777cb7-nvvrd: exit status 1
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (363.39s)
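Note on the failure above: the kubelet log shows the root cause, anonymous pulls of docker.io/jettech/kube-webhook-certgen being rejected with "toomanyrequests", i.e. the Docker Hub pull rate limit. A minimal way to check the remaining anonymous quota from the CI host (a sketch, assuming curl and jq are installed there) is:

    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

The HEAD request itself is documented not to count against the quota. Authenticating the host to Docker Hub, or mirroring the kube-webhook-certgen image into a registry that is not rate limited, would avoid this class of failure.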

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (92.44s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-996779 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-996779 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (1m30.066214361s)

                                                
                                                
** stderr ** 
	error: timed out waiting for the condition on pods/ingress-nginx-controller-7fcf777cb7-nvvrd

                                                
                                                
** /stderr **
addons_test.go:207: failed waiting for ingress-nginx-controller : exit status 1
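Beyond the automated post-mortem that follows, the pending controller pod can be inspected directly with kubectl; a short diagnostic sketch using the same context and label selector as the test:

    kubectl --context ingress-addon-legacy-996779 -n ingress-nginx get pods -o wide
    kubectl --context ingress-addon-legacy-996779 -n ingress-nginx describe pod -l app.kubernetes.io/component=controller
    kubectl --context ingress-addon-legacy-996779 -n ingress-nginx get events --sort-by=.lastTimestamp

In this run that would be expected to surface the missing ingress-nginx-admission secret on the controller pod and the ImagePullBackOff events on the admission create/patch jobs already visible in the kubelet log.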
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-996779
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-996779:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "33133c4cc7386b754bf07c14ecb0ad9cf226b60b3bcdd7868469d0efaba5278f",
	        "Created": "2023-12-12T00:34:28.478651304Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1144572,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T00:34:28.797498141Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5372d9a9dbba152548ea1c7dddaca1a9a8c998722f22aaa148c1ee00bf6473be",
	        "ResolvConfPath": "/var/lib/docker/containers/33133c4cc7386b754bf07c14ecb0ad9cf226b60b3bcdd7868469d0efaba5278f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/33133c4cc7386b754bf07c14ecb0ad9cf226b60b3bcdd7868469d0efaba5278f/hostname",
	        "HostsPath": "/var/lib/docker/containers/33133c4cc7386b754bf07c14ecb0ad9cf226b60b3bcdd7868469d0efaba5278f/hosts",
	        "LogPath": "/var/lib/docker/containers/33133c4cc7386b754bf07c14ecb0ad9cf226b60b3bcdd7868469d0efaba5278f/33133c4cc7386b754bf07c14ecb0ad9cf226b60b3bcdd7868469d0efaba5278f-json.log",
	        "Name": "/ingress-addon-legacy-996779",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-996779:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-996779",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8e1af87c87d6d78c2400b828243f7fb4c87923638674dcc5d52a4d7aa9185ab9-init/diff:/var/lib/docker/overlay2/c2a4fdcea722509eecd2151e38f63a7bf15f9db138183afe352dd4d4bae4600f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e1af87c87d6d78c2400b828243f7fb4c87923638674dcc5d52a4d7aa9185ab9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e1af87c87d6d78c2400b828243f7fb4c87923638674dcc5d52a4d7aa9185ab9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e1af87c87d6d78c2400b828243f7fb4c87923638674dcc5d52a4d7aa9185ab9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-996779",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-996779/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-996779",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-996779",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-996779",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a21320cf5ac119656128f0640ea803e37f5e213873309801c6b0850578ca9984",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34025"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34024"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34021"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34023"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34022"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a21320cf5ac1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-996779": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "33133c4cc738",
	                        "ingress-addon-legacy-996779"
	                    ],
	                    "NetworkID": "f93f4e79528b1f2d8a4fa7837ba29fe1e4897fa1f29fb970b286a7a56eb6350c",
	                    "EndpointID": "4054e004ff382f0fb204dc7c0d06d49a25951ed54983ad3b4df6e06afeaaa4df",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
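Individual fields can be pulled out of the inspect output above with a Go template instead of reading the full JSON; for example, the container's address on the cluster network and the host port published for 22/tcp:

    docker inspect -f '{{ (index .NetworkSettings.Networks "ingress-addon-legacy-996779").IPAddress }}' ingress-addon-legacy-996779
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' ingress-addon-legacy-996779

With the values captured above these resolve to 192.168.49.2 and 34025.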
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-996779 -n ingress-addon-legacy-996779
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-996779 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-996779 logs -n 25: (1.383427478s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-885247 image ls                                             | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	| image          | functional-885247 image load                                           | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-885247 image ls                                             | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	| image          | functional-885247 image save --daemon                                  | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-885247               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-885247 ssh sudo cat                                         | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | /etc/test/nested/copy/1117383/hosts                                    |                             |         |         |                     |                     |
	| ssh            | functional-885247 ssh sudo cat                                         | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | /etc/ssl/certs/1117383.pem                                             |                             |         |         |                     |                     |
	| ssh            | functional-885247 ssh sudo cat                                         | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | /usr/share/ca-certificates/1117383.pem                                 |                             |         |         |                     |                     |
	| ssh            | functional-885247 ssh sudo cat                                         | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | /etc/ssl/certs/51391683.0                                              |                             |         |         |                     |                     |
	| ssh            | functional-885247 ssh sudo cat                                         | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | /etc/ssl/certs/11173832.pem                                            |                             |         |         |                     |                     |
	| ssh            | functional-885247 ssh sudo cat                                         | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | /usr/share/ca-certificates/11173832.pem                                |                             |         |         |                     |                     |
	| ssh            | functional-885247 ssh sudo cat                                         | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                              |                             |         |         |                     |                     |
	| image          | functional-885247                                                      | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-885247                                                      | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-885247 ssh pgrep                                            | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-885247 image build -t                                       | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | localhost/my-image:functional-885247                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-885247 image ls                                             | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	| image          | functional-885247                                                      | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-885247                                                      | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| update-context | functional-885247                                                      | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-885247                                                      | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-885247                                                      | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| delete         | -p functional-885247                                                   | functional-885247           | jenkins | v1.32.0 | 12 Dec 23 00:34 UTC | 12 Dec 23 00:34 UTC |
	| start          | -p ingress-addon-legacy-996779                                         | ingress-addon-legacy-996779 | jenkins | v1.32.0 | 12 Dec 23 00:34 UTC | 12 Dec 23 00:35 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-996779                                            | ingress-addon-legacy-996779 | jenkins | v1.32.0 | 12 Dec 23 00:35 UTC |                     |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-996779                                            | ingress-addon-legacy-996779 | jenkins | v1.32.0 | 12 Dec 23 00:41 UTC | 12 Dec 23 00:41 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:34:10
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:34:10.862709 1144110 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:34:10.862919 1144110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:34:10.862931 1144110 out.go:309] Setting ErrFile to fd 2...
	I1212 00:34:10.862936 1144110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:34:10.863265 1144110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	I1212 00:34:10.863776 1144110 out.go:303] Setting JSON to false
	I1212 00:34:10.864722 1144110 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":26197,"bootTime":1702315054,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 00:34:10.864803 1144110 start.go:138] virtualization:  
	I1212 00:34:10.867536 1144110 out.go:177] * [ingress-addon-legacy-996779] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:34:10.870316 1144110 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:34:10.872310 1144110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:34:10.870490 1144110 notify.go:220] Checking for updates...
	I1212 00:34:10.876400 1144110 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:34:10.878485 1144110 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	I1212 00:34:10.880825 1144110 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:34:10.883726 1144110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:34:10.886952 1144110 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:34:10.910961 1144110 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:34:10.911078 1144110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:34:11.015400 1144110 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-12 00:34:11.00571572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:34:11.015503 1144110 docker.go:295] overlay module found
	I1212 00:34:11.017862 1144110 out.go:177] * Using the docker driver based on user configuration
	I1212 00:34:11.020004 1144110 start.go:298] selected driver: docker
	I1212 00:34:11.020019 1144110 start.go:902] validating driver "docker" against <nil>
	I1212 00:34:11.020031 1144110 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:34:11.020653 1144110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:34:11.085552 1144110 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-12 00:34:11.076181789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:34:11.085713 1144110 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 00:34:11.085948 1144110 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:34:11.087954 1144110 out.go:177] * Using Docker driver with root privileges
	I1212 00:34:11.090349 1144110 cni.go:84] Creating CNI manager for ""
	I1212 00:34:11.090371 1144110 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:34:11.090382 1144110 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:34:11.090398 1144110 start_flags.go:323] config:
	{Name:ingress-addon-legacy-996779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-996779 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:34:11.092682 1144110 out.go:177] * Starting control plane node ingress-addon-legacy-996779 in cluster ingress-addon-legacy-996779
	I1212 00:34:11.094375 1144110 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 00:34:11.095954 1144110 out.go:177] * Pulling base image ...
	I1212 00:34:11.097650 1144110 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 00:34:11.097711 1144110 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:34:11.114872 1144110 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon, skipping pull
	I1212 00:34:11.114911 1144110 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 exists in daemon, skipping load
	I1212 00:34:11.167645 1144110 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1212 00:34:11.167674 1144110 cache.go:56] Caching tarball of preloaded images
	I1212 00:34:11.167844 1144110 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 00:34:11.170152 1144110 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1212 00:34:11.171959 1144110 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1212 00:34:11.283342 1144110 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1212 00:34:20.638485 1144110 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1212 00:34:20.638613 1144110 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1212 00:34:21.830955 1144110 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1212 00:34:21.831362 1144110 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/config.json ...
	I1212 00:34:21.831394 1144110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/config.json: {Name:mk9224e714fa93329a657ba7e5eaebf2850d6949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:21.831580 1144110 cache.go:194] Successfully downloaded all kic artifacts
	I1212 00:34:21.831638 1144110 start.go:365] acquiring machines lock for ingress-addon-legacy-996779: {Name:mk96b53ba9ba8ba029ff3fcdb15e7bcfc32e7d88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:34:21.831697 1144110 start.go:369] acquired machines lock for "ingress-addon-legacy-996779" in 46.563µs
	I1212 00:34:21.831720 1144110 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-996779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-996779 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:34:21.831796 1144110 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:34:21.834153 1144110 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1212 00:34:21.834378 1144110 start.go:159] libmachine.API.Create for "ingress-addon-legacy-996779" (driver="docker")
	I1212 00:34:21.834410 1144110 client.go:168] LocalClient.Create starting
	I1212 00:34:21.834476 1144110 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem
	I1212 00:34:21.834508 1144110 main.go:141] libmachine: Decoding PEM data...
	I1212 00:34:21.834527 1144110 main.go:141] libmachine: Parsing certificate...
	I1212 00:34:21.834582 1144110 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem
	I1212 00:34:21.834605 1144110 main.go:141] libmachine: Decoding PEM data...
	I1212 00:34:21.834620 1144110 main.go:141] libmachine: Parsing certificate...
	I1212 00:34:21.834962 1144110 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-996779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:34:21.851774 1144110 cli_runner.go:211] docker network inspect ingress-addon-legacy-996779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:34:21.851870 1144110 network_create.go:281] running [docker network inspect ingress-addon-legacy-996779] to gather additional debugging logs...
	I1212 00:34:21.851892 1144110 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-996779
	W1212 00:34:21.868744 1144110 cli_runner.go:211] docker network inspect ingress-addon-legacy-996779 returned with exit code 1
	I1212 00:34:21.868778 1144110 network_create.go:284] error running [docker network inspect ingress-addon-legacy-996779]: docker network inspect ingress-addon-legacy-996779: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-996779 not found
	I1212 00:34:21.868792 1144110 network_create.go:286] output of [docker network inspect ingress-addon-legacy-996779]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-996779 not found
	
	** /stderr **
	I1212 00:34:21.868929 1144110 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:34:21.886412 1144110 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40020f63f0}
	I1212 00:34:21.886446 1144110 network_create.go:124] attempt to create docker network ingress-addon-legacy-996779 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 00:34:21.886506 1144110 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-996779 ingress-addon-legacy-996779
	I1212 00:34:21.960112 1144110 network_create.go:108] docker network ingress-addon-legacy-996779 192.168.49.0/24 created
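[editor's note] Before creating the node container, minikube picks a free private subnet and creates a dedicated bridge network for the profile (network_create.go above). A rough Go sketch of that step, shelling out to a simplified form of the same `docker network create` command; the subnet, gateway, MTU and profile name come from the log, while the minikube labels and masquerade options are omitted:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Simplified version of the command shown in the log.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.49.0/24",
		"--gateway=192.168.49.1",
		"-o", "com.docker.network.driver.mtu=1500",
		"ingress-addon-legacy-996779",
	)
	out, err := cmd.CombinedOutput()
	fmt.Printf("output: %s err: %v\n", out, err)
}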
	I1212 00:34:21.960146 1144110 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-996779" container
	I1212 00:34:21.960217 1144110 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:34:21.976636 1144110 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-996779 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-996779 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:34:21.994796 1144110 oci.go:103] Successfully created a docker volume ingress-addon-legacy-996779
	I1212 00:34:21.994882 1144110 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-996779-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-996779 --entrypoint /usr/bin/test -v ingress-addon-legacy-996779:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -d /var/lib
	I1212 00:34:23.532043 1144110 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-996779-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-996779 --entrypoint /usr/bin/test -v ingress-addon-legacy-996779:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -d /var/lib: (1.537116156s)
	I1212 00:34:23.532072 1144110 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-996779
	I1212 00:34:23.532090 1144110 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 00:34:23.532111 1144110 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:34:23.532199 1144110 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-996779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:34:28.399111 1144110 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-996779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -I lz4 -xf /preloaded.tar -C /extractDir: (4.866861834s)
	I1212 00:34:28.399144 1144110 kic.go:203] duration metric: took 4.867031 seconds to extract preloaded images to volume
	W1212 00:34:28.399279 1144110 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 00:34:28.399389 1144110 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:34:28.462805 1144110 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-996779 --name ingress-addon-legacy-996779 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-996779 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-996779 --network ingress-addon-legacy-996779 --ip 192.168.49.2 --volume ingress-addon-legacy-996779:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401
	I1212 00:34:28.806711 1144110 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996779 --format={{.State.Running}}
	I1212 00:34:28.833695 1144110 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996779 --format={{.State.Status}}
	I1212 00:34:28.859343 1144110 cli_runner.go:164] Run: docker exec ingress-addon-legacy-996779 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:34:28.922216 1144110 oci.go:144] the created container "ingress-addon-legacy-996779" has a running status.
	I1212 00:34:28.922246 1144110 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa...
	I1212 00:34:29.250069 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1212 00:34:29.250136 1144110 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:34:29.280403 1144110 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996779 --format={{.State.Status}}
	I1212 00:34:29.316990 1144110 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:34:29.317009 1144110 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-996779 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:34:29.394379 1144110 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996779 --format={{.State.Status}}
	I1212 00:34:29.438229 1144110 machine.go:88] provisioning docker machine ...
	I1212 00:34:29.438263 1144110 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-996779"
	I1212 00:34:29.438330 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:34:29.476317 1144110 main.go:141] libmachine: Using SSH client type: native
	I1212 00:34:29.476760 1144110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34025 <nil> <nil>}
	I1212 00:34:29.476781 1144110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-996779 && echo "ingress-addon-legacy-996779" | sudo tee /etc/hostname
	I1212 00:34:29.477426 1144110 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35748->127.0.0.1:34025: read: connection reset by peer
	I1212 00:34:32.632142 1144110 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-996779
	
	I1212 00:34:32.632224 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:34:32.651152 1144110 main.go:141] libmachine: Using SSH client type: native
	I1212 00:34:32.651565 1144110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34025 <nil> <nil>}
	I1212 00:34:32.651590 1144110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-996779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-996779/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-996779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:34:32.794510 1144110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:34:32.794539 1144110 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17764-1111943/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-1111943/.minikube}
	I1212 00:34:32.794558 1144110 ubuntu.go:177] setting up certificates
	I1212 00:34:32.794567 1144110 provision.go:83] configureAuth start
	I1212 00:34:32.794627 1144110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-996779
	I1212 00:34:32.812330 1144110 provision.go:138] copyHostCerts
	I1212 00:34:32.812369 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem
	I1212 00:34:32.812400 1144110 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem, removing ...
	I1212 00:34:32.812412 1144110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem
	I1212 00:34:32.812486 1144110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem (1123 bytes)
	I1212 00:34:32.812568 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem
	I1212 00:34:32.812591 1144110 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem, removing ...
	I1212 00:34:32.812600 1144110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem
	I1212 00:34:32.812629 1144110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem (1679 bytes)
	I1212 00:34:32.812680 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem
	I1212 00:34:32.812703 1144110 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem, removing ...
	I1212 00:34:32.812711 1144110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem
	I1212 00:34:32.812735 1144110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem (1082 bytes)
	I1212 00:34:32.812788 1144110 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-996779 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-996779]
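[editor's note] provision.go:112 issues a server certificate signed by the minikube CA with the SANs listed in the log line above. The real implementation lives in minikube's cert helpers; the following is only a stdlib sketch of issuing a CA-signed certificate with IP and DNS SANs, with the throwaway CA, key sizes and lifetimes made up for illustration:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Hypothetical CA; minikube would load ca.pem / ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ingress-addon-legacy-996779"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-996779"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println("server cert bytes:", len(der), "err:", err)
}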
	I1212 00:34:33.611671 1144110 provision.go:172] copyRemoteCerts
	I1212 00:34:33.611742 1144110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:34:33.611788 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:34:33.629657 1144110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa Username:docker}
	I1212 00:34:33.731648 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:34:33.731718 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:34:33.759442 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:34:33.759505 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:34:33.787896 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:34:33.787959 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1212 00:34:33.815636 1144110 provision.go:86] duration metric: configureAuth took 1.021055131s
	I1212 00:34:33.815662 1144110 ubuntu.go:193] setting minikube options for container-runtime
	I1212 00:34:33.815855 1144110 config.go:182] Loaded profile config "ingress-addon-legacy-996779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1212 00:34:33.815960 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:34:33.835791 1144110 main.go:141] libmachine: Using SSH client type: native
	I1212 00:34:33.836217 1144110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34025 <nil> <nil>}
	I1212 00:34:33.836238 1144110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:34:34.113153 1144110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:34:34.113177 1144110 machine.go:91] provisioned docker machine in 4.674922886s
	I1212 00:34:34.113188 1144110 client.go:171] LocalClient.Create took 12.27877161s
	I1212 00:34:34.113205 1144110 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-996779" took 12.278827215s
	I1212 00:34:34.113213 1144110 start.go:300] post-start starting for "ingress-addon-legacy-996779" (driver="docker")
	I1212 00:34:34.113225 1144110 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:34:34.113317 1144110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:34:34.113366 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:34:34.131670 1144110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa Username:docker}
	I1212 00:34:34.233803 1144110 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:34:34.237958 1144110 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:34:34.237992 1144110 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 00:34:34.238003 1144110 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 00:34:34.238011 1144110 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 00:34:34.238025 1144110 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/addons for local assets ...
	I1212 00:34:34.238091 1144110 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/files for local assets ...
	I1212 00:34:34.238184 1144110 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> 11173832.pem in /etc/ssl/certs
	I1212 00:34:34.238196 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> /etc/ssl/certs/11173832.pem
	I1212 00:34:34.238306 1144110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:34:34.248661 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem --> /etc/ssl/certs/11173832.pem (1708 bytes)
	I1212 00:34:34.277071 1144110 start.go:303] post-start completed in 163.841827ms
	I1212 00:34:34.277509 1144110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-996779
	I1212 00:34:34.295580 1144110 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/config.json ...
	I1212 00:34:34.295862 1144110 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:34:34.295903 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:34:34.313421 1144110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa Username:docker}
	I1212 00:34:34.411302 1144110 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:34:34.416960 1144110 start.go:128] duration metric: createHost completed in 12.585148374s
	I1212 00:34:34.416987 1144110 start.go:83] releasing machines lock for "ingress-addon-legacy-996779", held for 12.585275485s
	I1212 00:34:34.417058 1144110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-996779
	I1212 00:34:34.435971 1144110 ssh_runner.go:195] Run: cat /version.json
	I1212 00:34:34.436019 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:34:34.436274 1144110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:34:34.436329 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:34:34.456111 1144110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa Username:docker}
	I1212 00:34:34.457020 1144110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa Username:docker}
	I1212 00:34:34.550005 1144110 ssh_runner.go:195] Run: systemctl --version
	I1212 00:34:34.688090 1144110 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:34:34.837391 1144110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:34:34.842882 1144110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:34:34.868378 1144110 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 00:34:34.868467 1144110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:34:34.907656 1144110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1212 00:34:34.907677 1144110 start.go:475] detecting cgroup driver to use...
	I1212 00:34:34.907710 1144110 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 00:34:34.907759 1144110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:34:34.926115 1144110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:34:34.939372 1144110 docker.go:203] disabling cri-docker service (if available) ...
	I1212 00:34:34.939487 1144110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:34:34.955406 1144110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:34:34.972660 1144110 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:34:35.066963 1144110 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:34:35.166100 1144110 docker.go:219] disabling docker service ...
	I1212 00:34:35.166223 1144110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:34:35.188658 1144110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:34:35.203688 1144110 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:34:35.307141 1144110 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:34:35.410757 1144110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:34:35.423773 1144110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:34:35.443225 1144110 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 00:34:35.443317 1144110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:35.455180 1144110 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:34:35.455266 1144110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:35.467528 1144110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:34:35.479417 1144110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
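[editor's note] The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the legacy pause image and the cgroupfs cgroup manager, via the sed one-liners shown in the log (after first pointing crictl at the CRI-O socket in /etc/crictl.yaml). A small, hypothetical Go equivalent of those three edits, applied to an in-memory stand-in for the config file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in contents for /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
`
	// Mirror the sed edits from the log, in the same order.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}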
	I1212 00:34:35.491658 1144110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:34:35.502946 1144110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:34:35.512836 1144110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:34:35.522852 1144110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:34:35.617677 1144110 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:34:35.742892 1144110 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:34:35.743008 1144110 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:34:35.748320 1144110 start.go:543] Will wait 60s for crictl version
	I1212 00:34:35.748385 1144110 ssh_runner.go:195] Run: which crictl
	I1212 00:34:35.752685 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:34:35.798640 1144110 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1212 00:34:35.798731 1144110 ssh_runner.go:195] Run: crio --version
	I1212 00:34:35.846063 1144110 ssh_runner.go:195] Run: crio --version
	I1212 00:34:35.888813 1144110 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1212 00:34:35.890782 1144110 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-996779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:34:35.907867 1144110 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:34:35.912533 1144110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
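[editor's note] Here (and again later for control-plane.minikube.internal) minikube makes the /etc/hosts entry idempotent: the bash one-liner above strips any existing line for the name before appending a fresh "ip<TAB>host" entry. A hypothetical Go rendering of the same pattern, writing to a scratch file rather than /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line ending in the host name and appends
// a fresh "ip\thost" entry, mirroring the grep -v / echo pipeline in the log.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Illustrative target path; the real code edits /etc/hosts over SSH.
	err := upsertHostsEntry("hosts.test", "192.168.49.1", "host.minikube.internal")
	fmt.Println("upsert:", err)
}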
	I1212 00:34:35.925916 1144110 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 00:34:35.925990 1144110 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:34:35.975454 1144110 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1212 00:34:35.975532 1144110 ssh_runner.go:195] Run: which lz4
	I1212 00:34:35.979874 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1212 00:34:35.979968 1144110 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 00:34:35.984122 1144110 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 00:34:35.984156 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1212 00:34:38.096266 1144110 crio.go:444] Took 2.116328 seconds to copy over tarball
	I1212 00:34:38.096342 1144110 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 00:34:40.811421 1144110 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.715048176s)
	I1212 00:34:40.811452 1144110 crio.go:451] Took 2.715161 seconds to extract the tarball
	I1212 00:34:40.811463 1144110 ssh_runner.go:146] rm: /preloaded.tar.lz4
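[editor's note] Because no preloaded images were found in the runtime, the tarball is copied into the node, unpacked with tar's lz4 filter, and then removed. A trivial Go sketch of the extraction call shown above, assuming lz4 and the tarball are already present on the target:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as in the log; /var is where CRI-O keeps its image store.
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}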
	I1212 00:34:40.909383 1144110 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:34:40.952498 1144110 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1212 00:34:40.952520 1144110 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 00:34:40.952585 1144110 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:34:40.952778 1144110 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 00:34:40.952850 1144110 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 00:34:40.952936 1144110 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 00:34:40.953020 1144110 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 00:34:40.953086 1144110 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1212 00:34:40.953145 1144110 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1212 00:34:40.953272 1144110 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1212 00:34:40.954174 1144110 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 00:34:40.954649 1144110 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 00:34:40.954913 1144110 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 00:34:40.955241 1144110 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 00:34:40.955294 1144110 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:34:40.955331 1144110 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1212 00:34:40.955373 1144110 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1212 00:34:40.955405 1144110 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	W1212 00:34:41.297348 1144110 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1212 00:34:41.297601 1144110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1212 00:34:41.321563 1144110 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1212 00:34:41.321798 1144110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W1212 00:34:41.339354 1144110 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1212 00:34:41.339557 1144110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 00:34:41.345652 1144110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1212 00:34:41.347960 1144110 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1212 00:34:41.348128 1144110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W1212 00:34:41.357305 1144110 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1212 00:34:41.357516 1144110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1212 00:34:41.379182 1144110 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1212 00:34:41.379254 1144110 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 00:34:41.379319 1144110 ssh_runner.go:195] Run: which crictl
	W1212 00:34:41.381699 1144110 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1212 00:34:41.381876 1144110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1212 00:34:41.454010 1144110 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1212 00:34:41.454052 1144110 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1212 00:34:41.454104 1144110 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.532646 1144110 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1212 00:34:41.532690 1144110 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 00:34:41.532737 1144110 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.532821 1144110 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1212 00:34:41.532837 1144110 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 00:34:41.532857 1144110 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.532923 1144110 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1212 00:34:41.532941 1144110 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 00:34:41.532966 1144110 ssh_runner.go:195] Run: which crictl
	W1212 00:34:41.551135 1144110 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1212 00:34:41.551328 1144110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:34:41.578486 1144110 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1212 00:34:41.578527 1144110 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1212 00:34:41.578576 1144110 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.578658 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1212 00:34:41.578719 1144110 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1212 00:34:41.578733 1144110 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 00:34:41.578757 1144110 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.578807 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1212 00:34:41.578885 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1212 00:34:41.578925 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 00:34:41.578966 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 00:34:41.737312 1144110 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1212 00:34:41.737396 1144110 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:34:41.737482 1144110 ssh_runner.go:195] Run: which crictl
	I1212 00:34:41.741455 1144110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1212 00:34:41.741533 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1212 00:34:41.741612 1144110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1212 00:34:41.741647 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1212 00:34:41.741715 1144110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1212 00:34:41.741770 1144110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1212 00:34:41.741810 1144110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1212 00:34:41.744612 1144110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:34:41.797939 1144110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1212 00:34:41.804800 1144110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1212 00:34:41.839872 1144110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 00:34:41.839942 1144110 cache_images.go:92] LoadImages completed in 887.409646ms
	W1212 00:34:41.840012 1144110 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
	I1212 00:34:41.840085 1144110 ssh_runner.go:195] Run: crio config
	I1212 00:34:41.899848 1144110 cni.go:84] Creating CNI manager for ""
	I1212 00:34:41.899916 1144110 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:34:41.899963 1144110 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 00:34:41.900005 1144110 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-996779 NodeName:ingress-addon-legacy-996779 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 00:34:41.900229 1144110 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-996779"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:34:41.900321 1144110 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-996779 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-996779 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
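[editor's note] The rendered kubeadm config above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---", which minikube later copies to /var/tmp/minikube/kubeadm.yaml.new. A stdlib-only Go sketch that splits such a file and reports the kind of each document, handy when checking what was actually rendered; the input path is illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	// Split on document separators and pull out each document's kind.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "unknown"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kind = strings.TrimPrefix(line, "kind: ")
				break
			}
		}
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
}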
	I1212 00:34:41.900421 1144110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1212 00:34:41.911241 1144110 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:34:41.911385 1144110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:34:41.921888 1144110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1212 00:34:41.943373 1144110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1212 00:34:41.964143 1144110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1212 00:34:41.984868 1144110 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:34:41.989386 1144110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:34:42.003745 1144110 certs.go:56] Setting up /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779 for IP: 192.168.49.2
	I1212 00:34:42.003783 1144110 certs.go:190] acquiring lock for shared ca certs: {Name:mk50788b4819ee46b65351495e43cdf246a6ddce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:42.004055 1144110 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key
	I1212 00:34:42.004131 1144110 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key
	I1212 00:34:42.004195 1144110 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.key
	I1212 00:34:42.004208 1144110 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt with IP's: []
	I1212 00:34:43.037408 1144110 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt ...
	I1212 00:34:43.037441 1144110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: {Name:mk6ac0e137aee842cbcd456a55f45a6647393aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:43.037640 1144110 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.key ...
	I1212 00:34:43.037655 1144110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.key: {Name:mk58861a542846cd448c54c06f1dc30fd1c29ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:43.037756 1144110 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.key.dd3b5fb2
	I1212 00:34:43.037775 1144110 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 00:34:43.318509 1144110 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.crt.dd3b5fb2 ...
	I1212 00:34:43.318540 1144110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.crt.dd3b5fb2: {Name:mk559eb62e656c481bd4787f0152c86b1ec62bb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:43.318728 1144110 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.key.dd3b5fb2 ...
	I1212 00:34:43.318743 1144110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.key.dd3b5fb2: {Name:mkaada065d27109ec7fc10d382956777a02ae880 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:43.318828 1144110 certs.go:337] copying /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.crt
	I1212 00:34:43.318907 1144110 certs.go:341] copying /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.key
	I1212 00:34:43.318972 1144110 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.key
	I1212 00:34:43.318992 1144110 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.crt with IP's: []
	I1212 00:34:43.709054 1144110 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.crt ...
	I1212 00:34:43.709087 1144110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.crt: {Name:mk91191acf61975e76fc6fae02d794df9458aeed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:43.709282 1144110 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.key ...
	I1212 00:34:43.709304 1144110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.key: {Name:mkfa5b06c0d332bfb588e2a8041b35a95d9b90c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:34:43.709393 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:34:43.709422 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:34:43.709433 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:34:43.709445 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:34:43.709462 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:34:43.709478 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:34:43.709490 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:34:43.709505 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:34:43.709556 1144110 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383.pem (1338 bytes)
	W1212 00:34:43.709599 1144110 certs.go:433] ignoring /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383_empty.pem, impossibly tiny 0 bytes
	I1212 00:34:43.709613 1144110 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:34:43.709647 1144110 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:34:43.709679 1144110 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:34:43.709713 1144110 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem (1679 bytes)
	I1212 00:34:43.709763 1144110 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem (1708 bytes)
	I1212 00:34:43.709795 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> /usr/share/ca-certificates/11173832.pem
	I1212 00:34:43.709812 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:43.709826 1144110 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383.pem -> /usr/share/ca-certificates/1117383.pem
	I1212 00:34:43.710402 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 00:34:43.741081 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:34:43.770533 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:34:43.799128 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:34:43.827279 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:34:43.855583 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:34:43.883278 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:34:43.911526 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:34:43.939579 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem --> /usr/share/ca-certificates/11173832.pem (1708 bytes)
	I1212 00:34:43.967368 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:34:43.995334 1144110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383.pem --> /usr/share/ca-certificates/1117383.pem (1338 bytes)
	I1212 00:34:44.025370 1144110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:34:44.047075 1144110 ssh_runner.go:195] Run: openssl version
	I1212 00:34:44.054375 1144110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11173832.pem && ln -fs /usr/share/ca-certificates/11173832.pem /etc/ssl/certs/11173832.pem"
	I1212 00:34:44.066183 1144110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11173832.pem
	I1212 00:34:44.070984 1144110 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:25 /usr/share/ca-certificates/11173832.pem
	I1212 00:34:44.071051 1144110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11173832.pem
	I1212 00:34:44.079787 1144110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11173832.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:34:44.091645 1144110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:34:44.103081 1144110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:44.107795 1144110 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:44.107882 1144110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:34:44.116613 1144110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:34:44.128610 1144110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1117383.pem && ln -fs /usr/share/ca-certificates/1117383.pem /etc/ssl/certs/1117383.pem"
	I1212 00:34:44.140724 1144110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1117383.pem
	I1212 00:34:44.145370 1144110 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:25 /usr/share/ca-certificates/1117383.pem
	I1212 00:34:44.145435 1144110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1117383.pem
	I1212 00:34:44.154006 1144110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1117383.pem /etc/ssl/certs/51391683.0"
	I1212 00:34:44.165564 1144110 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 00:34:44.170083 1144110 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 00:34:44.170154 1144110 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-996779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-996779 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:34:44.170252 1144110 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:34:44.170317 1144110 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:34:44.210428 1144110 cri.go:89] found id: ""
	I1212 00:34:44.210509 1144110 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:34:44.221450 1144110 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:34:44.231886 1144110 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:34:44.232026 1144110 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:34:44.242539 1144110 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:34:44.242600 1144110 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:34:44.297748 1144110 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1212 00:34:44.298268 1144110 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 00:34:44.358100 1144110 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:34:44.358213 1144110 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1212 00:34:44.358274 1144110 kubeadm.go:322] OS: Linux
	I1212 00:34:44.358337 1144110 kubeadm.go:322] CGROUPS_CPU: enabled
	I1212 00:34:44.358409 1144110 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1212 00:34:44.358479 1144110 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1212 00:34:44.358559 1144110 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1212 00:34:44.358623 1144110 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1212 00:34:44.358702 1144110 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1212 00:34:44.449492 1144110 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:34:44.449701 1144110 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:34:44.449803 1144110 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 00:34:44.681330 1144110 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:34:44.682939 1144110 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:34:44.683189 1144110 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 00:34:44.793614 1144110 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:34:44.798353 1144110 out.go:204]   - Generating certificates and keys ...
	I1212 00:34:44.798479 1144110 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 00:34:44.798605 1144110 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 00:34:45.225195 1144110 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:34:45.746651 1144110 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:34:46.211611 1144110 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:34:46.498478 1144110 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 00:34:46.767332 1144110 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 00:34:46.767631 1144110 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-996779 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:34:46.886971 1144110 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 00:34:46.887317 1144110 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-996779 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:34:47.553043 1144110 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:34:47.774843 1144110 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:34:48.430223 1144110 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 00:34:48.430874 1144110 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:34:49.073708 1144110 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:34:49.203550 1144110 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:34:49.765906 1144110 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:34:49.997726 1144110 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:34:49.998441 1144110 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:34:50.005836 1144110 out.go:204]   - Booting up control plane ...
	I1212 00:34:50.005953 1144110 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:34:50.011906 1144110 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:34:50.013941 1144110 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:34:50.015483 1144110 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:34:50.018544 1144110 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 00:35:02.021668 1144110 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002655 seconds
	I1212 00:35:02.021785 1144110 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:35:02.034961 1144110 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:35:02.553706 1144110 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:35:02.553848 1144110 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-996779 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1212 00:35:03.061004 1144110 kubeadm.go:322] [bootstrap-token] Using token: z4ajwh.1qn32homr9mxalew
	I1212 00:35:03.063214 1144110 out.go:204]   - Configuring RBAC rules ...
	I1212 00:35:03.063351 1144110 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:35:03.073685 1144110 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:35:03.092866 1144110 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:35:03.097745 1144110 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:35:03.100867 1144110 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:35:03.108634 1144110 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:35:03.123434 1144110 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:35:03.430928 1144110 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 00:35:03.605721 1144110 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 00:35:03.607962 1144110 kubeadm.go:322] 
	I1212 00:35:03.608033 1144110 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 00:35:03.608062 1144110 kubeadm.go:322] 
	I1212 00:35:03.608140 1144110 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 00:35:03.608149 1144110 kubeadm.go:322] 
	I1212 00:35:03.608174 1144110 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 00:35:03.608230 1144110 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:35:03.608282 1144110 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:35:03.608290 1144110 kubeadm.go:322] 
	I1212 00:35:03.608340 1144110 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 00:35:03.608414 1144110 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:35:03.608497 1144110 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:35:03.608506 1144110 kubeadm.go:322] 
	I1212 00:35:03.608585 1144110 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:35:03.608665 1144110 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 00:35:03.608673 1144110 kubeadm.go:322] 
	I1212 00:35:03.608752 1144110 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token z4ajwh.1qn32homr9mxalew \
	I1212 00:35:03.608865 1144110 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:423d166c085e277a11bea519bc38c8d176eb97d5c6d6f0fd8c403765ff119d59 \
	I1212 00:35:03.608891 1144110 kubeadm.go:322]     --control-plane 
	I1212 00:35:03.608899 1144110 kubeadm.go:322] 
	I1212 00:35:03.608999 1144110 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:35:03.609008 1144110 kubeadm.go:322] 
	I1212 00:35:03.609088 1144110 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token z4ajwh.1qn32homr9mxalew \
	I1212 00:35:03.609190 1144110 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:423d166c085e277a11bea519bc38c8d176eb97d5c6d6f0fd8c403765ff119d59 
	I1212 00:35:03.610044 1144110 kubeadm.go:322] W1212 00:34:44.296984    1236 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1212 00:35:03.610262 1144110 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1212 00:35:03.610383 1144110 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:35:03.610508 1144110 kubeadm.go:322] W1212 00:34:50.012080    1236 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 00:35:03.610635 1144110 kubeadm.go:322] W1212 00:34:50.014067    1236 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 00:35:03.610652 1144110 cni.go:84] Creating CNI manager for ""
	I1212 00:35:03.610667 1144110 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:03.613477 1144110 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 00:35:03.615823 1144110 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:35:03.625352 1144110 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1212 00:35:03.625371 1144110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 00:35:03.649557 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:35:04.139756 1144110 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:35:04.139846 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:04.139873 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4 minikube.k8s.io/name=ingress-addon-legacy-996779 minikube.k8s.io/updated_at=2023_12_12T00_35_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:04.284979 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:04.285047 1144110 ops.go:34] apiserver oom_adj: -16
	I1212 00:35:04.380524 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:04.972860 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:05.472513 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:05.973342 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:06.472489 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:06.973040 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:07.473370 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:07.972858 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:08.472400 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:08.972432 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:09.472461 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:09.972451 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:10.473271 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:10.973032 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:11.473040 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:11.973075 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:12.473119 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:12.972963 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:13.473386 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:13.973385 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:14.473392 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:14.972585 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:15.472917 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:15.972758 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:16.472457 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:16.973207 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:17.473191 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:17.972649 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:18.472943 1144110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:35:18.571094 1144110 kubeadm.go:1088] duration metric: took 14.431322985s to wait for elevateKubeSystemPrivileges.
	I1212 00:35:18.571126 1144110 kubeadm.go:406] StartCluster complete in 34.400979067s
	I1212 00:35:18.571143 1144110 settings.go:142] acquiring lock: {Name:mk4639df610f4394c6679c82a1803a108086063e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:18.571229 1144110 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:35:18.571917 1144110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/kubeconfig: {Name:mk6bda1f8356012618f11e41d531a3f786e443d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:35:18.572629 1144110 kapi.go:59] client config for ingress-addon-legacy-996779: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.key", CAFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7710), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:35:18.573446 1144110 config.go:182] Loaded profile config "ingress-addon-legacy-996779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1212 00:35:18.573506 1144110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:35:18.573612 1144110 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 00:35:18.573677 1144110 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-996779"
	I1212 00:35:18.573691 1144110 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-996779"
	I1212 00:35:18.573735 1144110 host.go:66] Checking if "ingress-addon-legacy-996779" exists ...
	I1212 00:35:18.574195 1144110 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996779 --format={{.State.Status}}
	I1212 00:35:18.575092 1144110 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 00:35:18.575465 1144110 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-996779"
	I1212 00:35:18.575488 1144110 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-996779"
	I1212 00:35:18.575783 1144110 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996779 --format={{.State.Status}}
	I1212 00:35:18.650619 1144110 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:35:18.649460 1144110 kapi.go:59] client config for ingress-addon-legacy-996779: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.key", CAFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7710), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:35:18.653300 1144110 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-996779"
	I1212 00:35:18.653343 1144110 host.go:66] Checking if "ingress-addon-legacy-996779" exists ...
	I1212 00:35:18.653828 1144110 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996779 --format={{.State.Status}}
	I1212 00:35:18.654084 1144110 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:35:18.654101 1144110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:35:18.654143 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:35:18.698112 1144110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa Username:docker}
	I1212 00:35:18.705461 1144110 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:35:18.705491 1144110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:35:18.705553 1144110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996779
	I1212 00:35:18.713547 1144110 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-996779" context rescaled to 1 replicas
	I1212 00:35:18.713592 1144110 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:35:18.717823 1144110 out.go:177] * Verifying Kubernetes components...
	I1212 00:35:18.720530 1144110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:35:18.731787 1144110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:35:18.741391 1144110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/ingress-addon-legacy-996779/id_rsa Username:docker}
	I1212 00:35:18.760663 1144110 kapi.go:59] client config for ingress-addon-legacy-996779: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.key", CAFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7710), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:35:18.761029 1144110 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-996779" to be "Ready" ...
	I1212 00:35:18.917777 1144110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:35:18.991821 1144110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:35:19.202878 1144110 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1212 00:35:19.394617 1144110 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 00:35:19.396514 1144110 addons.go:502] enable addons completed in 822.912116ms: enabled=[storage-provisioner default-storageclass]
	I1212 00:35:20.782650 1144110 node_ready.go:58] node "ingress-addon-legacy-996779" has status "Ready":"False"
	I1212 00:35:23.279713 1144110 node_ready.go:58] node "ingress-addon-legacy-996779" has status "Ready":"False"
	I1212 00:35:25.779854 1144110 node_ready.go:58] node "ingress-addon-legacy-996779" has status "Ready":"False"
	I1212 00:35:27.279265 1144110 node_ready.go:49] node "ingress-addon-legacy-996779" has status "Ready":"True"
	I1212 00:35:27.279293 1144110 node_ready.go:38] duration metric: took 8.51822554s waiting for node "ingress-addon-legacy-996779" to be "Ready" ...
	I1212 00:35:27.279304 1144110 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:35:27.286688 1144110 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-fdsk9" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:29.294293 1144110 pod_ready.go:102] pod "coredns-66bff467f8-fdsk9" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 00:35:18 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1212 00:35:31.294550 1144110 pod_ready.go:102] pod "coredns-66bff467f8-fdsk9" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 00:35:18 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1212 00:35:33.296934 1144110 pod_ready.go:102] pod "coredns-66bff467f8-fdsk9" in "kube-system" namespace has status "Ready":"False"
	I1212 00:35:35.796762 1144110 pod_ready.go:102] pod "coredns-66bff467f8-fdsk9" in "kube-system" namespace has status "Ready":"False"
	I1212 00:35:37.797024 1144110 pod_ready.go:92] pod "coredns-66bff467f8-fdsk9" in "kube-system" namespace has status "Ready":"True"
	I1212 00:35:37.797052 1144110 pod_ready.go:81] duration metric: took 10.510331682s waiting for pod "coredns-66bff467f8-fdsk9" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.797064 1144110 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-996779" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.801358 1144110 pod_ready.go:92] pod "etcd-ingress-addon-legacy-996779" in "kube-system" namespace has status "Ready":"True"
	I1212 00:35:37.801385 1144110 pod_ready.go:81] duration metric: took 4.312619ms waiting for pod "etcd-ingress-addon-legacy-996779" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.801399 1144110 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-996779" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.805550 1144110 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-996779" in "kube-system" namespace has status "Ready":"True"
	I1212 00:35:37.805575 1144110 pod_ready.go:81] duration metric: took 4.168107ms waiting for pod "kube-apiserver-ingress-addon-legacy-996779" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.805586 1144110 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-996779" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.809750 1144110 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-996779" in "kube-system" namespace has status "Ready":"True"
	I1212 00:35:37.809774 1144110 pod_ready.go:81] duration metric: took 4.180939ms waiting for pod "kube-controller-manager-ingress-addon-legacy-996779" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.809786 1144110 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d7hfm" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.814007 1144110 pod_ready.go:92] pod "kube-proxy-d7hfm" in "kube-system" namespace has status "Ready":"True"
	I1212 00:35:37.814034 1144110 pod_ready.go:81] duration metric: took 4.238488ms waiting for pod "kube-proxy-d7hfm" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.814044 1144110 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-996779" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:37.993451 1144110 request.go:629] Waited for 179.314666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-996779
	I1212 00:35:38.193178 1144110 request.go:629] Waited for 197.318171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-996779
	I1212 00:35:38.195760 1144110 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-996779" in "kube-system" namespace has status "Ready":"True"
	I1212 00:35:38.195790 1144110 pod_ready.go:81] duration metric: took 381.734504ms waiting for pod "kube-scheduler-ingress-addon-legacy-996779" in "kube-system" namespace to be "Ready" ...
	I1212 00:35:38.195803 1144110 pod_ready.go:38] duration metric: took 10.916482671s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:35:38.195817 1144110 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:35:38.195877 1144110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:35:38.208611 1144110 api_server.go:72] duration metric: took 19.494984541s to wait for apiserver process to appear ...
	I1212 00:35:38.208637 1144110 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:35:38.208653 1144110 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 00:35:38.217206 1144110 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 00:35:38.218085 1144110 api_server.go:141] control plane version: v1.18.20
	I1212 00:35:38.218111 1144110 api_server.go:131] duration metric: took 9.467298ms to wait for apiserver health ...
	I1212 00:35:38.218121 1144110 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:35:38.392450 1144110 request.go:629] Waited for 174.241372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1212 00:35:38.398136 1144110 system_pods.go:59] 8 kube-system pods found
	I1212 00:35:38.398171 1144110 system_pods.go:61] "coredns-66bff467f8-fdsk9" [f4a5ac98-fd88-41d5-a8f9-70a22dfca002] Running
	I1212 00:35:38.398178 1144110 system_pods.go:61] "etcd-ingress-addon-legacy-996779" [e2cc00a8-43c1-41b4-8c56-a7a8f0a8fde7] Running
	I1212 00:35:38.398183 1144110 system_pods.go:61] "kindnet-vtlkw" [cb7c6c14-13c5-46fe-be06-c0ee5259bfd9] Running
	I1212 00:35:38.398188 1144110 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-996779" [661c3601-3a0f-463d-a893-3d94c2ffb917] Running
	I1212 00:35:38.398227 1144110 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-996779" [e882d6c8-2fd0-4ba8-986b-b2c8c7251934] Running
	I1212 00:35:38.398240 1144110 system_pods.go:61] "kube-proxy-d7hfm" [d842c03e-6616-4f70-b70f-7c1e160858c9] Running
	I1212 00:35:38.398246 1144110 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-996779" [86dbb1c1-f575-4767-a090-f67ebd6fe628] Running
	I1212 00:35:38.398251 1144110 system_pods.go:61] "storage-provisioner" [9610964a-cbe8-4bdd-9b4f-f4438f39b894] Running
	I1212 00:35:38.398256 1144110 system_pods.go:74] duration metric: took 180.130118ms to wait for pod list to return data ...
	I1212 00:35:38.398267 1144110 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:35:38.592729 1144110 request.go:629] Waited for 194.367341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:35:38.595162 1144110 default_sa.go:45] found service account: "default"
	I1212 00:35:38.595194 1144110 default_sa.go:55] duration metric: took 196.919832ms for default service account to be created ...
	I1212 00:35:38.595204 1144110 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:35:38.792523 1144110 request.go:629] Waited for 197.259391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1212 00:35:38.798199 1144110 system_pods.go:86] 8 kube-system pods found
	I1212 00:35:38.798232 1144110 system_pods.go:89] "coredns-66bff467f8-fdsk9" [f4a5ac98-fd88-41d5-a8f9-70a22dfca002] Running
	I1212 00:35:38.798239 1144110 system_pods.go:89] "etcd-ingress-addon-legacy-996779" [e2cc00a8-43c1-41b4-8c56-a7a8f0a8fde7] Running
	I1212 00:35:38.798244 1144110 system_pods.go:89] "kindnet-vtlkw" [cb7c6c14-13c5-46fe-be06-c0ee5259bfd9] Running
	I1212 00:35:38.798250 1144110 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-996779" [661c3601-3a0f-463d-a893-3d94c2ffb917] Running
	I1212 00:35:38.798255 1144110 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-996779" [e882d6c8-2fd0-4ba8-986b-b2c8c7251934] Running
	I1212 00:35:38.798259 1144110 system_pods.go:89] "kube-proxy-d7hfm" [d842c03e-6616-4f70-b70f-7c1e160858c9] Running
	I1212 00:35:38.798264 1144110 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-996779" [86dbb1c1-f575-4767-a090-f67ebd6fe628] Running
	I1212 00:35:38.798269 1144110 system_pods.go:89] "storage-provisioner" [9610964a-cbe8-4bdd-9b4f-f4438f39b894] Running
	I1212 00:35:38.798275 1144110 system_pods.go:126] duration metric: took 203.067006ms to wait for k8s-apps to be running ...
	I1212 00:35:38.798282 1144110 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:35:38.798345 1144110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:35:38.812055 1144110 system_svc.go:56] duration metric: took 13.761801ms WaitForService to wait for kubelet.
	I1212 00:35:38.812079 1144110 kubeadm.go:581] duration metric: took 20.098460765s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 00:35:38.812097 1144110 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:35:38.992415 1144110 request.go:629] Waited for 180.240932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1212 00:35:38.995322 1144110 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 00:35:38.995358 1144110 node_conditions.go:123] node cpu capacity is 2
	I1212 00:35:38.995373 1144110 node_conditions.go:105] duration metric: took 183.268404ms to run NodePressure ...
	I1212 00:35:38.995405 1144110 start.go:228] waiting for startup goroutines ...
	I1212 00:35:38.995421 1144110 start.go:233] waiting for cluster config update ...
	I1212 00:35:38.995432 1144110 start.go:242] writing updated cluster config ...
	I1212 00:35:38.995732 1144110 ssh_runner.go:195] Run: rm -f paused
	I1212 00:35:39.059000 1144110 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1212 00:35:39.061545 1144110 out.go:177] 
	W1212 00:35:39.063428 1144110 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1212 00:35:39.065092 1144110 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1212 00:35:39.066870 1144110 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-996779" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Dec 12 00:41:54 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:41:54.848362551Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=ec8462e5-a452-4d29-a0d7-9a1f7fc630a8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:41:59 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:41:59.847953325Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=b85b66f5-078f-4e95-8578-f639576e0212 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:06 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:06.848123796Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=547ae57f-b9c6-443e-a575-8b2302ca27f0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:06 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:06.848402695Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=547ae57f-b9c6-443e-a575-8b2302ca27f0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:11 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:11.847821276Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=26f5ad32-e359-4646-b96f-66d7cf8cbaa1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:20 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:20.848028375Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=1cb4638e-33d5-48a8-b06f-aa0ea890745d name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:20 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:20.848297150Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=1cb4638e-33d5-48a8-b06f-aa0ea890745d name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:24 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:24.847850607Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=ca7fff29-fb98-485a-98d9-50053ba2cb8a name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:32 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:32.847836824Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=80e559fe-efe6-4fd7-a850-bad916860aca name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:32 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:32.848104869Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=80e559fe-efe6-4fd7-a850-bad916860aca name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:35 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:35.847744652Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=35fb4e2b-e8b9-4ed9-89c9-43cc1b7dbab3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:35 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:35.848028401Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=35fb4e2b-e8b9-4ed9-89c9-43cc1b7dbab3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:39 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:39.847863186Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=a4d41b1e-9f28-46b6-9378-d28bd98c3171 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:47 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:47.847782555Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=4c775ee6-465e-426e-b359-b3df282e53b5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:47 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:47.848059937Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=4c775ee6-465e-426e-b359-b3df282e53b5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:50 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:50.847712238Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=3f725a2c-7baa-4740-a7a1-106073987d3a name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:50 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:50.847983713Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=3f725a2c-7baa-4740-a7a1-106073987d3a name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:52 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:52.847707552Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=c9cf4308-293f-411c-96f6-d754f718bebe name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:59 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:59.847774470Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=edae4a55-9466-40cd-ba63-748c395a4c5b name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:42:59 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:42:59.848045453Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=edae4a55-9466-40cd-ba63-748c395a4c5b name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:43:02 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:43:02.847615338Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=673d4740-0b65-4732-9553-15d6a6b7d0ed name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:43:02 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:43:02.847882940Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=673d4740-0b65-4732-9553-15d6a6b7d0ed name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:43:04 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:43:04.847894784Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=8c4f7531-fd11-4b3a-93d2-c6c418fd6ce0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:43:13 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:43:13.847784191Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=88ffebd3-87b4-4ca4-b64c-ba4d6813ae97 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 00:43:13 ingress-addon-legacy-996779 crio[903]: time="2023-12-12 00:43:13.848053926Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=88ffebd3-87b4-4ca4-b64c-ba4d6813ae97 name=/runtime.v1alpha2.ImageService/ImageStatus
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ff8f0eb447271       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2   7 minutes ago       Running             storage-provisioner       0                   61e2a4828b083       storage-provisioner
	3b77b68bac2b0       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                  7 minutes ago       Running             coredns                   0                   78b029263a8b4       coredns-66bff467f8-fdsk9
	fe0391b007c1a       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                7 minutes ago       Running             kindnet-cni               0                   c990111658782       kindnet-vtlkw
	98591e814415a       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                  7 minutes ago       Running             kube-proxy                0                   39d923c7d6040       kube-proxy-d7hfm
	fa5a904c833a9       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                  8 minutes ago       Running             etcd                      0                   f5bc34c7eaaad       etcd-ingress-addon-legacy-996779
	37cc807eb8db0       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                  8 minutes ago       Running             kube-apiserver            0                   f96b59e1d42bf       kube-apiserver-ingress-addon-legacy-996779
	ccc7574d027de       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                  8 minutes ago       Running             kube-controller-manager   0                   ca79d23e526c6       kube-controller-manager-ingress-addon-legacy-996779
	b37841b0ca7e6       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                  8 minutes ago       Running             kube-scheduler            0                   fb854e212adbf       kube-scheduler-ingress-addon-legacy-996779
	
	* 
	* ==> coredns [3b77b68bac2b04afc3e7d721d997ed77ddd55c75453a2536c06e9f802f3f8a01] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:40791 - 6159 "HINFO IN 4484289827737440315.2223997190913354484. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012332821s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-996779
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-996779
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4
	                    minikube.k8s.io/name=ingress-addon-legacy-996779
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T00_35_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 00:35:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-996779
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 00:43:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 00:40:37 +0000   Tue, 12 Dec 2023 00:34:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 00:40:37 +0000   Tue, 12 Dec 2023 00:34:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 00:40:37 +0000   Tue, 12 Dec 2023 00:34:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 00:40:37 +0000   Tue, 12 Dec 2023 00:35:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-996779
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 784c7fc04ccc40988beb20f93b3be49d
	  System UUID:                8949106a-73b7-4519-9c40-203ad5cc8066
	  Boot ID:                    1e71add7-2409-4eb4-97fc-c7110220f3c5
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  ingress-nginx               ingress-nginx-admission-create-hmhpc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  ingress-nginx               ingress-nginx-admission-patch-dj25d                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  ingress-nginx               ingress-nginx-controller-7fcf777cb7-nvvrd              100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         7m35s
	  kube-system                 coredns-66bff467f8-fdsk9                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m56s
	  kube-system                 etcd-ingress-addon-legacy-996779                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kindnet-vtlkw                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m56s
	  kube-system                 kube-apiserver-ingress-addon-legacy-996779             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-996779    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-ingress-dns-minikube                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-d7hfm                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 kube-scheduler-ingress-addon-legacy-996779             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             210Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 8m8s   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m8s   kubelet     Node ingress-addon-legacy-996779 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m8s   kubelet     Node ingress-addon-legacy-996779 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m8s   kubelet     Node ingress-addon-legacy-996779 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m55s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                7m48s  kubelet     Node ingress-addon-legacy-996779 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001117] FS-Cache: O-key=[8] '12633b0000000000'
	[  +0.000754] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000973] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=0000000059a16183
	[  +0.001084] FS-Cache: N-key=[8] '12633b0000000000'
	[  +0.003102] FS-Cache: Duplicate cookie detected
	[  +0.000725] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001029] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=000000006a4eadc9
	[  +0.001098] FS-Cache: O-key=[8] '12633b0000000000'
	[  +0.000729] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000971] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=00000000ef12e937
	[  +0.001096] FS-Cache: N-key=[8] '12633b0000000000'
	[  +1.721638] FS-Cache: Duplicate cookie detected
	[  +0.000740] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001038] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=000000009ed47378
	[  +0.001181] FS-Cache: O-key=[8] '11633b0000000000'
	[  +0.000791] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000997] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=0000000059a16183
	[  +0.001129] FS-Cache: N-key=[8] '11633b0000000000'
	[  +0.334169] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001009] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=000000009942789b
	[  +0.001136] FS-Cache: O-key=[8] '17633b0000000000'
	[  +0.000746] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=000000006ac44817
	[  +0.001100] FS-Cache: N-key=[8] '17633b0000000000'
	
	* 
	* ==> etcd [fa5a904c833a9ec3d6a6ecb36751bb27ec22964245bbc48fc71c4c8ef086ed32] <==
	* raft2023/12/12 00:34:55 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/12 00:34:55 INFO: aec36adc501070cc became follower at term 1
	raft2023/12/12 00:34:55 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-12 00:34:55.373658 W | auth: simple token is not cryptographically signed
	2023-12-12 00:34:55.377503 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-12 00:34:55.381569 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/12/12 00:34:55 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-12 00:34:55.382267 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-12-12 00:34:55.382613 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-12 00:34:55.382800 I | embed: listening for peers on 192.168.49.2:2380
	2023-12-12 00:34:55.382959 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/12/12 00:34:55 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/12 00:34:55 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/12 00:34:55 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/12 00:34:55 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/12 00:34:55 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-12 00:34:55.989332 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-12 00:34:56.007496 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-12 00:34:56.017323 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-12 00:34:56.021293 I | etcdserver: published {Name:ingress-addon-legacy-996779 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-12 00:34:56.025256 I | embed: ready to serve client requests
	2023-12-12 00:34:56.050204 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-12 00:34:56.097302 I | embed: ready to serve client requests
	2023-12-12 00:34:56.098591 I | embed: serving client requests on 192.168.49.2:2379
	2023-12-12 00:35:19.161272 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-66bff467f8-fdsk9.179fee68fa8f759b\" " with result "range_response_count:1 size:829" took too long (104.220542ms) to execute
	
	* 
	* ==> kernel <==
	*  00:43:14 up  7:25,  0 users,  load average: 0.11, 0.27, 0.45
	Linux ingress-addon-legacy-996779 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [fe0391b007c1a6fc35e858c2018dbca95ee2d82e45f004a50d9e9b5c92625d45] <==
	* I1212 00:41:12.286341       1 main.go:227] handling current node
	I1212 00:41:22.289410       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:41:22.289442       1 main.go:227] handling current node
	I1212 00:41:32.292270       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:41:32.292300       1 main.go:227] handling current node
	I1212 00:41:42.305907       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:41:42.306009       1 main.go:227] handling current node
	I1212 00:41:52.316790       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:41:52.316817       1 main.go:227] handling current node
	I1212 00:42:02.320021       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:42:02.320051       1 main.go:227] handling current node
	I1212 00:42:12.328207       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:42:12.328233       1 main.go:227] handling current node
	I1212 00:42:22.332069       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:42:22.332099       1 main.go:227] handling current node
	I1212 00:42:32.340349       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:42:32.340378       1 main.go:227] handling current node
	I1212 00:42:42.349408       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:42:42.349437       1 main.go:227] handling current node
	I1212 00:42:52.352820       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:42:52.352853       1 main.go:227] handling current node
	I1212 00:43:02.364853       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:43:02.364886       1 main.go:227] handling current node
	I1212 00:43:12.367924       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:43:12.367955       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [37cc807eb8db0b61a416564775bbeecb1cea6629f4a34a259723e681c4a15aca] <==
	* I1212 00:35:00.397753       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I1212 00:35:00.397798       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E1212 00:35:00.421144       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1212 00:35:00.489214       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 00:35:00.489261       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:35:00.489552       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 00:35:00.494781       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1212 00:35:00.553497       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1212 00:35:01.316029       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1212 00:35:01.316060       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1212 00:35:01.322246       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1212 00:35:01.327161       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1212 00:35:01.327249       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1212 00:35:01.717899       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:35:01.764254       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1212 00:35:01.876878       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1212 00:35:01.877931       1 controller.go:609] quota admission added evaluator for: endpoints
	I1212 00:35:01.881403       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:35:02.703715       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1212 00:35:03.407308       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1212 00:35:03.481139       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1212 00:35:06.809511       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:35:18.773766       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1212 00:35:18.798826       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1212 00:35:39.926315       1 controller.go:609] quota admission added evaluator for: jobs.batch
	
	* 
	* ==> kube-controller-manager [ccc7574d027de156202827c2d3c6f2f08c572f0da026556be7af1066e9f751ea] <==
	* I1212 00:35:18.826085       1 range_allocator.go:172] Starting range CIDR allocator
	I1212 00:35:18.826109       1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
	I1212 00:35:18.826118       1 shared_informer.go:230] Caches are synced for cidrallocator 
	I1212 00:35:18.826688       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1212 00:35:18.826900       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1212 00:35:18.830757       1 shared_informer.go:230] Caches are synced for attach detach 
	I1212 00:35:18.834121       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1212 00:35:18.834138       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1212 00:35:18.848183       1 shared_informer.go:230] Caches are synced for taint 
	I1212 00:35:18.848357       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W1212 00:35:18.848427       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-996779. Assuming now as a timestamp.
	I1212 00:35:18.848361       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1212 00:35:18.848499       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1212 00:35:18.848677       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-996779", UID:"26233a9f-95d5-40fc-99f3-eccca62ff91f", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-996779 event: Registered Node ingress-addon-legacy-996779 in Controller
	I1212 00:35:18.864643       1 range_allocator.go:373] Set node ingress-addon-legacy-996779 PodCIDR to [10.244.0.0/24]
	I1212 00:35:18.876305       1 shared_informer.go:230] Caches are synced for TTL 
	I1212 00:35:18.877945       1 shared_informer.go:230] Caches are synced for GC 
	I1212 00:35:18.898523       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"8fcddf4e-f4b6-41f1-af1c-efe5cecc7987", APIVersion:"apps/v1", ResourceVersion:"206", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-d7hfm
	E1212 00:35:19.210955       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"8fcddf4e-f4b6-41f1-af1c-efe5cecc7987", ResourceVersion:"206", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63837938103, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000c4fa40), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x4000c4faa0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000c4fb00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4000cad380), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x4000c4fb60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000c4fbc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000c4fc80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000ddc0a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000da4b18), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004688c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000f480)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000da4b68)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1212 00:35:28.848939       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1212 00:35:39.930873       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"8cba60db-69eb-4243-816b-3d3938781111", APIVersion:"apps/v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1212 00:35:39.953112       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"7d6da58c-4903-41aa-bd0e-14722a42faaa", APIVersion:"batch/v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-hmhpc
	I1212 00:35:39.953149       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"9b5ee3d1-70aa-480c-8190-31e5412fab72", APIVersion:"apps/v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-nvvrd
	I1212 00:35:39.994760       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"4e3d2b75-3995-4956-919e-85385fe3f2fe", APIVersion:"batch/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-dj25d
	
	* 
	* ==> kube-proxy [98591e814415a7f68a501b413ac3dea0b90d3e1f3d46ecf22ae957d501b471d1] <==
	* W1212 00:35:19.598282       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1212 00:35:19.609791       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1212 00:35:19.609841       1 server_others.go:186] Using iptables Proxier.
	I1212 00:35:19.610201       1 server.go:583] Version: v1.18.20
	I1212 00:35:19.613145       1 config.go:315] Starting service config controller
	I1212 00:35:19.613361       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1212 00:35:19.613672       1 config.go:133] Starting endpoints config controller
	I1212 00:35:19.613710       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1212 00:35:19.713786       1 shared_informer.go:230] Caches are synced for service config 
	I1212 00:35:19.713877       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [b37841b0ca7e6c583e5f1b2bf62b18bba025f9ac412204bf622cf40da1944da1] <==
	* W1212 00:35:00.415172       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 00:35:00.468910       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1212 00:35:00.469020       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1212 00:35:00.477993       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1212 00:35:00.478280       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:35:00.478343       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:35:00.478398       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1212 00:35:00.491405       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 00:35:00.497566       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 00:35:00.497775       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 00:35:00.497963       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 00:35:00.498098       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 00:35:00.498224       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 00:35:00.498387       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 00:35:00.498509       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 00:35:00.498639       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 00:35:00.498790       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 00:35:00.513620       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 00:35:00.513906       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 00:35:01.372949       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 00:35:01.430369       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 00:35:01.726166       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1212 00:35:04.878544       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1212 00:35:19.081674       1 factory.go:503] pod kube-system/coredns-66bff467f8-fdsk9 is already present in the backoff queue
	E1212 00:35:19.393546       1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
	
	* 
	* ==> kubelet <==
	* Dec 12 00:42:21 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:21.131173    1596 pod_workers.go:191] Error syncing pod fcffcc0f-d87e-4945-8a5f-70d419061f11 ("ingress-nginx-admission-create-hmhpc_ingress-nginx(fcffcc0f-d87e-4945-8a5f-70d419061f11)"), skipping: failed to "StartContainer" for "create" with ErrImagePull: "rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Dec 12 00:42:24 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:24.848203    1596 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 00:42:24 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:24.848236    1596 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 00:42:24 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:24.848276    1596 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 00:42:24 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:24.848305    1596 pod_workers.go:191] Error syncing pod 9aeff6b6-5e16-47c5-bb2b-27552c101dd5 ("kube-ingress-dns-minikube_kube-system(9aeff6b6-5e16-47c5-bb2b-27552c101dd5)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 12 00:42:32 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:32.848765    1596 pod_workers.go:191] Error syncing pod fcffcc0f-d87e-4945-8a5f-70d419061f11 ("ingress-nginx-admission-create-hmhpc_ingress-nginx(fcffcc0f-d87e-4945-8a5f-70d419061f11)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:42:35 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:35.848271    1596 pod_workers.go:191] Error syncing pod 8ea9127d-bc0b-42d9-8db7-c8bd53c877d2 ("ingress-nginx-admission-patch-dj25d_ingress-nginx(8ea9127d-bc0b-42d9-8db7-c8bd53c877d2)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:42:39 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:39.848198    1596 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 00:42:39 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:39.848240    1596 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 00:42:39 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:39.848281    1596 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 00:42:39 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:39.848311    1596 pod_workers.go:191] Error syncing pod 9aeff6b6-5e16-47c5-bb2b-27552c101dd5 ("kube-ingress-dns-minikube_kube-system(9aeff6b6-5e16-47c5-bb2b-27552c101dd5)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 12 00:42:47 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:47.848467    1596 pod_workers.go:191] Error syncing pod fcffcc0f-d87e-4945-8a5f-70d419061f11 ("ingress-nginx-admission-create-hmhpc_ingress-nginx(fcffcc0f-d87e-4945-8a5f-70d419061f11)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:42:50 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:50.848397    1596 pod_workers.go:191] Error syncing pod 8ea9127d-bc0b-42d9-8db7-c8bd53c877d2 ("ingress-nginx-admission-patch-dj25d_ingress-nginx(8ea9127d-bc0b-42d9-8db7-c8bd53c877d2)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:42:52 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:52.848145    1596 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 00:42:52 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:52.848178    1596 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 00:42:52 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:52.848219    1596 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 00:42:52 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:52.848247    1596 pod_workers.go:191] Error syncing pod 9aeff6b6-5e16-47c5-bb2b-27552c101dd5 ("kube-ingress-dns-minikube_kube-system(9aeff6b6-5e16-47c5-bb2b-27552c101dd5)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 12 00:42:59 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:42:59.848261    1596 pod_workers.go:191] Error syncing pod fcffcc0f-d87e-4945-8a5f-70d419061f11 ("ingress-nginx-admission-create-hmhpc_ingress-nginx(fcffcc0f-d87e-4945-8a5f-70d419061f11)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:43:02 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:43:02.848095    1596 pod_workers.go:191] Error syncing pod 8ea9127d-bc0b-42d9-8db7-c8bd53c877d2 ("ingress-nginx-admission-patch-dj25d_ingress-nginx(8ea9127d-bc0b-42d9-8db7-c8bd53c877d2)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:43:04 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:43:04.848572    1596 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 00:43:04 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:43:04.848625    1596 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 00:43:04 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:43:04.848678    1596 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 00:43:04 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:43:04.848719    1596 pod_workers.go:191] Error syncing pod 9aeff6b6-5e16-47c5-bb2b-27552c101dd5 ("kube-ingress-dns-minikube_kube-system(9aeff6b6-5e16-47c5-bb2b-27552c101dd5)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 12 00:43:13 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:43:13.848391    1596 pod_workers.go:191] Error syncing pod fcffcc0f-d87e-4945-8a5f-70d419061f11 ("ingress-nginx-admission-create-hmhpc_ingress-nginx(fcffcc0f-d87e-4945-8a5f-70d419061f11)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 12 00:43:14 ingress-addon-legacy-996779 kubelet[1596]: E1212 00:43:14.848663    1596 pod_workers.go:191] Error syncing pod 8ea9127d-bc0b-42d9-8db7-c8bd53c877d2 ("ingress-nginx-admission-patch-dj25d_ingress-nginx(8ea9127d-bc0b-42d9-8db7-c8bd53c877d2)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	
	* 
	* ==> storage-provisioner [ff8f0eb44727154b75577edcbe42bd079daebaf3ab30852d958fbb8e0f0324b7] <==
	* I1212 00:35:32.350555       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 00:35:32.364194       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 00:35:32.364743       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 00:35:32.372504       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 00:35:32.372686       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-996779_79aa9244-d7c7-471b-a2f2-dad8f89fa9b9!
	I1212 00:35:32.373924       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4c690da1-02fd-4178-92cd-d3dd8aac3e57", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-996779_79aa9244-d7c7-471b-a2f2-dad8f89fa9b9 became leader
	I1212 00:35:32.473810       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-996779_79aa9244-d7c7-471b-a2f2-dad8f89fa9b9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-996779 -n ingress-addon-legacy-996779
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-996779 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-hmhpc ingress-nginx-admission-patch-dj25d ingress-nginx-controller-7fcf777cb7-nvvrd kube-ingress-dns-minikube
helpers_test.go:274: ======> post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ingress-addon-legacy-996779 describe pod ingress-nginx-admission-create-hmhpc ingress-nginx-admission-patch-dj25d ingress-nginx-controller-7fcf777cb7-nvvrd kube-ingress-dns-minikube
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-996779 describe pod ingress-nginx-admission-create-hmhpc ingress-nginx-admission-patch-dj25d ingress-nginx-controller-7fcf777cb7-nvvrd kube-ingress-dns-minikube: exit status 1 (83.37543ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hmhpc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dj25d" not found
	Error from server (NotFound): pods "ingress-nginx-controller-7fcf777cb7-nvvrd" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ingress-addon-legacy-996779 describe pod ingress-nginx-admission-create-hmhpc ingress-nginx-admission-patch-dj25d ingress-nginx-controller-7fcf777cb7-nvvrd kube-ingress-dns-minikube: exit status 1
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (92.44s)
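Note on the failure above: the kubelet excerpt shows why kube-ingress-dns-minikube never started. CRI-O rejects the short image name cryptexlabs/minikube-ingress-dns:0.3.0@sha256:... because no unqualified-search registries are defined in /etc/containers/registries.conf on the node. A minimal sketch of two workarounds, assuming the node's registries.conf is writable and CRI-O runs as the crio systemd unit (neither assumption comes from the test itself):

	# Option 1: fully qualify the image in the addon manifest so no short-name
	# search is needed, e.g. prefix it with docker.io/:
	#   image: docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704...
	#
	# Option 2: declare a search registry on the node and restart CRI-O.
	#   /etc/containers/registries.conf (top of file, before any [[registry]] table):
	#     unqualified-search-registries = ["docker.io"]
	out/minikube-linux-arm64 -p ingress-addon-legacy-996779 ssh "sudo systemctl restart crio"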

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270339 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270339 -- exec busybox-5bc68d56bd-f7wq7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270339 -- exec busybox-5bc68d56bd-f7wq7 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-270339 -- exec busybox-5bc68d56bd-f7wq7 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (234.078819ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-f7wq7): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270339 -- exec busybox-5bc68d56bd-tqh9c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270339 -- exec busybox-5bc68d56bd-tqh9c -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-270339 -- exec busybox-5bc68d56bd-tqh9c -- sh -c "ping -c 1 192.168.58.1": exit status 1 (222.650103ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-tqh9c): exit status 1
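Both exec attempts fail with "ping: permission denied (are you root?)": busybox ping needs a raw ICMP socket, which an unprivileged container cannot open by default. A minimal sketch of two common ways around this, assuming the cluster permits the "safe" sysctl or the extra capability (neither is part of the test):

	# a) Allow unprivileged ICMP echo sockets via the pod spec (namespaced sysctl):
	#    securityContext:
	#      sysctls:
	#        - name: net.ipv4.ping_group_range
	#          value: "0 2147483647"
	#
	# b) Or grant the busybox container CAP_NET_RAW:
	#    securityContext:
	#      capabilities:
	#        add: ["NET_RAW"]
	#
	# Re-run the same check the test performs:
	out/minikube-linux-arm64 kubectl -p multinode-270339 -- exec busybox-5bc68d56bd-f7wq7 -- sh -c "ping -c 1 192.168.58.1"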
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-270339
helpers_test.go:235: (dbg) docker inspect multinode-270339:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8cbfcb2f926f2933e9f6ac3a1ae628335b89b5892c0a645f94e42abd1790dda6",
	        "Created": "2023-12-12T00:49:47.781437847Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1179722,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T00:49:48.103259414Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5372d9a9dbba152548ea1c7dddaca1a9a8c998722f22aaa148c1ee00bf6473be",
	        "ResolvConfPath": "/var/lib/docker/containers/8cbfcb2f926f2933e9f6ac3a1ae628335b89b5892c0a645f94e42abd1790dda6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8cbfcb2f926f2933e9f6ac3a1ae628335b89b5892c0a645f94e42abd1790dda6/hostname",
	        "HostsPath": "/var/lib/docker/containers/8cbfcb2f926f2933e9f6ac3a1ae628335b89b5892c0a645f94e42abd1790dda6/hosts",
	        "LogPath": "/var/lib/docker/containers/8cbfcb2f926f2933e9f6ac3a1ae628335b89b5892c0a645f94e42abd1790dda6/8cbfcb2f926f2933e9f6ac3a1ae628335b89b5892c0a645f94e42abd1790dda6-json.log",
	        "Name": "/multinode-270339",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-270339:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-270339",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/015e9bdd411aba5c1178499668f0cf72bd6583b5bc2d0a2fbcc4a3d40dbdb30e-init/diff:/var/lib/docker/overlay2/c2a4fdcea722509eecd2151e38f63a7bf15f9db138183afe352dd4d4bae4600f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/015e9bdd411aba5c1178499668f0cf72bd6583b5bc2d0a2fbcc4a3d40dbdb30e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/015e9bdd411aba5c1178499668f0cf72bd6583b5bc2d0a2fbcc4a3d40dbdb30e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/015e9bdd411aba5c1178499668f0cf72bd6583b5bc2d0a2fbcc4a3d40dbdb30e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-270339",
	                "Source": "/var/lib/docker/volumes/multinode-270339/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-270339",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-270339",
	                "name.minikube.sigs.k8s.io": "multinode-270339",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4773e95165d60e4d3627fed683eb5cba2d0a2884d8128fb4f696b48a0da79a5a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34085"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34081"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34083"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34082"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4773e95165d6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-270339": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8cbfcb2f926f",
	                        "multinode-270339"
	                    ],
	                    "NetworkID": "0b6f78e5fcd530c3bb8a35bfbd305a5f5dbfd719d724c3ca1f85fa0fb7d9b120",
	                    "EndpointID": "6975c51a8d48f0f4a35b97d663f87107aa2f5c8a51a3bdf6d59138e3b82309d3",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
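For context, 192.168.58.1 (the address both pods tried to ping) is simply the gateway of the multinode-270339 bridge network shown in the inspect output above, so the target itself is reachable in principle and the failure is purely the ICMP permission issue. A sketch of pulling that gateway straight from the daemon, reusing the same template string the minikube logs below apply to network inspect:

	docker network inspect multinode-270339 --format "{{range .IPAM.Config}}{{.Gateway}}{{end}}"
	# expected output, per the inspect data above: 192.168.58.1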
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-270339 -n multinode-270339
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-270339 logs -n 25: (1.480210998s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-678156                           | mount-start-2-678156 | jenkins | v1.32.0 | 12 Dec 23 00:49 UTC | 12 Dec 23 00:49 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-678156 ssh -- ls                    | mount-start-2-678156 | jenkins | v1.32.0 | 12 Dec 23 00:49 UTC | 12 Dec 23 00:49 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-676236                           | mount-start-1-676236 | jenkins | v1.32.0 | 12 Dec 23 00:49 UTC | 12 Dec 23 00:49 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-678156 ssh -- ls                    | mount-start-2-678156 | jenkins | v1.32.0 | 12 Dec 23 00:49 UTC | 12 Dec 23 00:49 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-678156                           | mount-start-2-678156 | jenkins | v1.32.0 | 12 Dec 23 00:49 UTC | 12 Dec 23 00:49 UTC |
	| start   | -p mount-start-2-678156                           | mount-start-2-678156 | jenkins | v1.32.0 | 12 Dec 23 00:49 UTC | 12 Dec 23 00:49 UTC |
	| ssh     | mount-start-2-678156 ssh -- ls                    | mount-start-2-678156 | jenkins | v1.32.0 | 12 Dec 23 00:49 UTC | 12 Dec 23 00:49 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-678156                           | mount-start-2-678156 | jenkins | v1.32.0 | 12 Dec 23 00:49 UTC | 12 Dec 23 00:49 UTC |
	| delete  | -p mount-start-1-676236                           | mount-start-1-676236 | jenkins | v1.32.0 | 12 Dec 23 00:49 UTC | 12 Dec 23 00:49 UTC |
	| start   | -p multinode-270339                               | multinode-270339     | jenkins | v1.32.0 | 12 Dec 23 00:49 UTC | 12 Dec 23 00:51 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-270339 -- apply -f                   | multinode-270339     | jenkins | v1.32.0 | 12 Dec 23 00:51 UTC | 12 Dec 23 00:51 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-270339 -- rollout                    | multinode-270339     | jenkins | v1.32.0 | 12 Dec 23 00:51 UTC | 12 Dec 23 00:51 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-270339 -- get pods -o                | multinode-270339     | jenkins | v1.32.0 | 12 Dec 23 00:51 UTC | 12 Dec 23 00:51 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-270339 -- get pods -o                | multinode-270339     | jenkins | v1.32.0 | 12 Dec 23 00:51 UTC | 12 Dec 23 00:51 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-270339 -- exec                       | multinode-270339     | jenkins | v1.32.0 | 12 Dec 23 00:51 UTC | 12 Dec 23 00:51 UTC |
	|         | busybox-5bc68d56bd-f7wq7 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-270339 -- exec                       | multinode-270339     | jenkins | v1.32.0 | 12 Dec 23 00:51 UTC | 12 Dec 23 00:51 UTC |
	|         | busybox-5bc68d56bd-tqh9c --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-270339 -- exec                       | multinode-270339     | jenkins | v1.32.0 | 12 Dec 23 00:51 UTC | 12 Dec 23 00:51 UTC |
	|         | busybox-5bc68d56bd-f7wq7 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-270339 -- exec                       | multinode-270339     | jenkins | v1.32.0 | 12 Dec 23 00:51 UTC | 12 Dec 23 00:51 UTC |
	|         | busybox-5bc68d56bd-tqh9c --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-270339 -- exec                       | multinode-270339     | jenkins | v1.32.0 | 12 Dec 23 00:51 UTC | 12 Dec 23 00:51 UTC |
	|         | busybox-5bc68d56bd-f7wq7 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-270339 -- exec                       | multinode-270339     | jenkins | v1.32.0 | 12 Dec 23 00:51 UTC | 12 Dec 23 00:51 UTC |
	|         | busybox-5bc68d56bd-tqh9c -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-270339 -- get pods -o                | multinode-270339     | jenkins | v1.32.0 | 12 Dec 23 00:51 UTC | 12 Dec 23 00:51 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-270339 -- exec                       | multinode-270339     | jenkins | v1.32.0 | 12 Dec 23 00:51 UTC | 12 Dec 23 00:51 UTC |
	|         | busybox-5bc68d56bd-f7wq7                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-270339 -- exec                       | multinode-270339     | jenkins | v1.32.0 | 12 Dec 23 00:51 UTC |                     |
	|         | busybox-5bc68d56bd-f7wq7 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-270339 -- exec                       | multinode-270339     | jenkins | v1.32.0 | 12 Dec 23 00:51 UTC | 12 Dec 23 00:51 UTC |
	|         | busybox-5bc68d56bd-tqh9c                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-270339 -- exec                       | multinode-270339     | jenkins | v1.32.0 | 12 Dec 23 00:51 UTC |                     |
	|         | busybox-5bc68d56bd-tqh9c -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:49:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:49:42.341326 1179266 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:49:42.341474 1179266 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:49:42.341483 1179266 out.go:309] Setting ErrFile to fd 2...
	I1212 00:49:42.341489 1179266 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:49:42.341761 1179266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	I1212 00:49:42.342229 1179266 out.go:303] Setting JSON to false
	I1212 00:49:42.343096 1179266 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":27129,"bootTime":1702315054,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 00:49:42.343172 1179266 start.go:138] virtualization:  
	I1212 00:49:42.345750 1179266 out.go:177] * [multinode-270339] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:49:42.348761 1179266 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:49:42.348837 1179266 notify.go:220] Checking for updates...
	I1212 00:49:42.351622 1179266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:49:42.353445 1179266 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:49:42.355331 1179266 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	I1212 00:49:42.357047 1179266 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:49:42.358883 1179266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:49:42.360936 1179266 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:49:42.384872 1179266 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:49:42.384992 1179266 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:49:42.468783 1179266 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-12 00:49:42.458458567 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:49:42.468881 1179266 docker.go:295] overlay module found
	I1212 00:49:42.472391 1179266 out.go:177] * Using the docker driver based on user configuration
	I1212 00:49:42.474180 1179266 start.go:298] selected driver: docker
	I1212 00:49:42.474198 1179266 start.go:902] validating driver "docker" against <nil>
	I1212 00:49:42.474212 1179266 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:49:42.474918 1179266 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:49:42.543579 1179266 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-12 00:49:42.534284376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:49:42.543736 1179266 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 00:49:42.543954 1179266 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:49:42.546112 1179266 out.go:177] * Using Docker driver with root privileges
	I1212 00:49:42.547824 1179266 cni.go:84] Creating CNI manager for ""
	I1212 00:49:42.547842 1179266 cni.go:136] 0 nodes found, recommending kindnet
	I1212 00:49:42.547852 1179266 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:49:42.547868 1179266 start_flags.go:323] config:
	{Name:multinode-270339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-270339 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:49:42.550030 1179266 out.go:177] * Starting control plane node multinode-270339 in cluster multinode-270339
	I1212 00:49:42.551851 1179266 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 00:49:42.553651 1179266 out.go:177] * Pulling base image ...
	I1212 00:49:42.555578 1179266 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 00:49:42.555626 1179266 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1212 00:49:42.555653 1179266 cache.go:56] Caching tarball of preloaded images
	I1212 00:49:42.555674 1179266 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:49:42.555735 1179266 preload.go:174] Found /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:49:42.555746 1179266 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 00:49:42.556088 1179266 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/config.json ...
	I1212 00:49:42.556118 1179266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/config.json: {Name:mkfce589bf9a935267c851be715442e96ba8bb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:49:42.572918 1179266 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon, skipping pull
	I1212 00:49:42.572942 1179266 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 exists in daemon, skipping load
	I1212 00:49:42.572979 1179266 cache.go:194] Successfully downloaded all kic artifacts
	I1212 00:49:42.573040 1179266 start.go:365] acquiring machines lock for multinode-270339: {Name:mkd3135db80032407ee3db0ad426071b8f97b52e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:49:42.573159 1179266 start.go:369] acquired machines lock for "multinode-270339" in 102.134µs
	I1212 00:49:42.573184 1179266 start.go:93] Provisioning new machine with config: &{Name:multinode-270339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-270339 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:49:42.573281 1179266 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:49:42.575650 1179266 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1212 00:49:42.575894 1179266 start.go:159] libmachine.API.Create for "multinode-270339" (driver="docker")
	I1212 00:49:42.575944 1179266 client.go:168] LocalClient.Create starting
	I1212 00:49:42.576007 1179266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem
	I1212 00:49:42.576048 1179266 main.go:141] libmachine: Decoding PEM data...
	I1212 00:49:42.576074 1179266 main.go:141] libmachine: Parsing certificate...
	I1212 00:49:42.576131 1179266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem
	I1212 00:49:42.576153 1179266 main.go:141] libmachine: Decoding PEM data...
	I1212 00:49:42.576166 1179266 main.go:141] libmachine: Parsing certificate...
	I1212 00:49:42.576506 1179266 cli_runner.go:164] Run: docker network inspect multinode-270339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:49:42.593297 1179266 cli_runner.go:211] docker network inspect multinode-270339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:49:42.593384 1179266 network_create.go:281] running [docker network inspect multinode-270339] to gather additional debugging logs...
	I1212 00:49:42.593403 1179266 cli_runner.go:164] Run: docker network inspect multinode-270339
	W1212 00:49:42.610207 1179266 cli_runner.go:211] docker network inspect multinode-270339 returned with exit code 1
	I1212 00:49:42.610239 1179266 network_create.go:284] error running [docker network inspect multinode-270339]: docker network inspect multinode-270339: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-270339 not found
	I1212 00:49:42.610253 1179266 network_create.go:286] output of [docker network inspect multinode-270339]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-270339 not found
	
	** /stderr **
	I1212 00:49:42.610354 1179266 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:49:42.627508 1179266 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fb49185403af IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:74:1f:5b:43} reservation:<nil>}
	I1212 00:49:42.627888 1179266 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024f48a0}
	I1212 00:49:42.627910 1179266 network_create.go:124] attempt to create docker network multinode-270339 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1212 00:49:42.627974 1179266 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-270339 multinode-270339
	I1212 00:49:42.694478 1179266 network_create.go:108] docker network multinode-270339 192.168.58.0/24 created
	I1212 00:49:42.694511 1179266 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-270339" container
	I1212 00:49:42.694597 1179266 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:49:42.710396 1179266 cli_runner.go:164] Run: docker volume create multinode-270339 --label name.minikube.sigs.k8s.io=multinode-270339 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:49:42.727922 1179266 oci.go:103] Successfully created a docker volume multinode-270339
	I1212 00:49:42.728014 1179266 cli_runner.go:164] Run: docker run --rm --name multinode-270339-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-270339 --entrypoint /usr/bin/test -v multinode-270339:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -d /var/lib
	I1212 00:49:43.322732 1179266 oci.go:107] Successfully prepared a docker volume multinode-270339
	I1212 00:49:43.322791 1179266 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 00:49:43.322812 1179266 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:49:43.322895 1179266 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-270339:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:49:47.696571 1179266 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-270339:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -I lz4 -xf /preloaded.tar -C /extractDir: (4.37363031s)
	I1212 00:49:47.696606 1179266 kic.go:203] duration metric: took 4.373792 seconds to extract preloaded images to volume
	W1212 00:49:47.696743 1179266 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 00:49:47.696861 1179266 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:49:47.763319 1179266 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-270339 --name multinode-270339 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-270339 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-270339 --network multinode-270339 --ip 192.168.58.2 --volume multinode-270339:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401
	I1212 00:49:48.111685 1179266 cli_runner.go:164] Run: docker container inspect multinode-270339 --format={{.State.Running}}
	I1212 00:49:48.135807 1179266 cli_runner.go:164] Run: docker container inspect multinode-270339 --format={{.State.Status}}
	I1212 00:49:48.171220 1179266 cli_runner.go:164] Run: docker exec multinode-270339 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:49:48.251845 1179266 oci.go:144] the created container "multinode-270339" has a running status.
	I1212 00:49:48.251874 1179266 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339/id_rsa...
	I1212 00:49:48.967930 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1212 00:49:48.967999 1179266 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:49:48.992837 1179266 cli_runner.go:164] Run: docker container inspect multinode-270339 --format={{.State.Status}}
	I1212 00:49:49.015658 1179266 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:49:49.015678 1179266 kic_runner.go:114] Args: [docker exec --privileged multinode-270339 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:49:49.070730 1179266 cli_runner.go:164] Run: docker container inspect multinode-270339 --format={{.State.Status}}
	I1212 00:49:49.092413 1179266 machine.go:88] provisioning docker machine ...
	I1212 00:49:49.092448 1179266 ubuntu.go:169] provisioning hostname "multinode-270339"
	I1212 00:49:49.092519 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339
	I1212 00:49:49.115839 1179266 main.go:141] libmachine: Using SSH client type: native
	I1212 00:49:49.116271 1179266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34085 <nil> <nil>}
	I1212 00:49:49.116294 1179266 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-270339 && echo "multinode-270339" | sudo tee /etc/hostname
	I1212 00:49:49.286396 1179266 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-270339
	
	I1212 00:49:49.286521 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339
	I1212 00:49:49.312144 1179266 main.go:141] libmachine: Using SSH client type: native
	I1212 00:49:49.312562 1179266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34085 <nil> <nil>}
	I1212 00:49:49.312586 1179266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-270339' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-270339/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-270339' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:49:49.466318 1179266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:49:49.466344 1179266 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17764-1111943/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-1111943/.minikube}
	I1212 00:49:49.466374 1179266 ubuntu.go:177] setting up certificates
	I1212 00:49:49.466383 1179266 provision.go:83] configureAuth start
	I1212 00:49:49.466446 1179266 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-270339
	I1212 00:49:49.484871 1179266 provision.go:138] copyHostCerts
	I1212 00:49:49.484910 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem
	I1212 00:49:49.484947 1179266 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem, removing ...
	I1212 00:49:49.484956 1179266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem
	I1212 00:49:49.485035 1179266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem (1082 bytes)
	I1212 00:49:49.485113 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem
	I1212 00:49:49.485129 1179266 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem, removing ...
	I1212 00:49:49.485134 1179266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem
	I1212 00:49:49.485158 1179266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem (1123 bytes)
	I1212 00:49:49.485207 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem
	I1212 00:49:49.485222 1179266 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem, removing ...
	I1212 00:49:49.485226 1179266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem
	I1212 00:49:49.485324 1179266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem (1679 bytes)
	I1212 00:49:49.485387 1179266 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem org=jenkins.multinode-270339 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-270339]
	I1212 00:49:50.534666 1179266 provision.go:172] copyRemoteCerts
	I1212 00:49:50.534733 1179266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:49:50.534782 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339
	I1212 00:49:50.556245 1179266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34085 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339/id_rsa Username:docker}
	I1212 00:49:50.659764 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:49:50.659825 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:49:50.687032 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:49:50.687092 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 00:49:50.714477 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:49:50.714582 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:49:50.741539 1179266 provision.go:86] duration metric: configureAuth took 1.27514234s
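The copyRemoteCerts step above pushes ca.pem, server.pem and server-key.pem into /etc/docker on the node over SSH. Minikube does this through its internal ssh_runner, but an equivalent manual copy, using the connection details shown in the sshutil line (port 34085, the multinode-270339 id_rsa key, user docker), would look roughly as follows; the /tmp staging path is purely illustrative.

    # Hedged, manual equivalent of the copyRemoteCerts step (minikube itself uses ssh_runner.go, not scp).
    KEY=/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339/id_rsa
    scp -i "$KEY" -P 34085 \
        /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem \
        docker@127.0.0.1:/tmp/ca.pem
    # On the node, the file then has to land in /etc/docker/ca.pem with root ownership:
    ssh -i "$KEY" -p 34085 docker@127.0.0.1 'sudo install -m 0644 /tmp/ca.pem /etc/docker/ca.pem'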
	I1212 00:49:50.741603 1179266 ubuntu.go:193] setting minikube options for container-runtime
	I1212 00:49:50.741805 1179266 config.go:182] Loaded profile config "multinode-270339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:49:50.741913 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339
	I1212 00:49:50.760745 1179266 main.go:141] libmachine: Using SSH client type: native
	I1212 00:49:50.761161 1179266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34085 <nil> <nil>}
	I1212 00:49:50.761186 1179266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:49:51.010164 1179266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:49:51.010197 1179266 machine.go:91] provisioned docker machine in 1.917763s
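A note on the "printf %!s(MISSING)" in the command a few lines above: the mangled verb is an artifact of minikube's own log formatting, not of what ran on the node; the SSH output that follows it shows the file content that was actually written. A hedged reconstruction of the effective command:

    # Hedged reconstruction of the command logged with the mangled "%!s(MISSING)" format verb.
    sudo mkdir -p /etc/sysconfig && printf "%s" "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio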
	I1212 00:49:51.010208 1179266 client.go:171] LocalClient.Create took 8.434256237s
	I1212 00:49:51.010221 1179266 start.go:167] duration metric: libmachine.API.Create for "multinode-270339" took 8.434327652s
	I1212 00:49:51.010229 1179266 start.go:300] post-start starting for "multinode-270339" (driver="docker")
	I1212 00:49:51.010239 1179266 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:49:51.010307 1179266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:49:51.010367 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339
	I1212 00:49:51.032714 1179266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34085 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339/id_rsa Username:docker}
	I1212 00:49:51.132232 1179266 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:49:51.136391 1179266 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1212 00:49:51.136410 1179266 command_runner.go:130] > NAME="Ubuntu"
	I1212 00:49:51.136431 1179266 command_runner.go:130] > VERSION_ID="22.04"
	I1212 00:49:51.136439 1179266 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1212 00:49:51.136445 1179266 command_runner.go:130] > VERSION_CODENAME=jammy
	I1212 00:49:51.136450 1179266 command_runner.go:130] > ID=ubuntu
	I1212 00:49:51.136455 1179266 command_runner.go:130] > ID_LIKE=debian
	I1212 00:49:51.136460 1179266 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1212 00:49:51.136466 1179266 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1212 00:49:51.136477 1179266 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1212 00:49:51.136485 1179266 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1212 00:49:51.136494 1179266 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1212 00:49:51.136551 1179266 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:49:51.136577 1179266 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 00:49:51.136593 1179266 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 00:49:51.136601 1179266 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 00:49:51.136617 1179266 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/addons for local assets ...
	I1212 00:49:51.136675 1179266 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/files for local assets ...
	I1212 00:49:51.136767 1179266 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> 11173832.pem in /etc/ssl/certs
	I1212 00:49:51.136779 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> /etc/ssl/certs/11173832.pem
	I1212 00:49:51.136891 1179266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:49:51.147564 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem --> /etc/ssl/certs/11173832.pem (1708 bytes)
	I1212 00:49:51.176331 1179266 start.go:303] post-start completed in 166.086783ms
	I1212 00:49:51.176693 1179266 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-270339
	I1212 00:49:51.194263 1179266 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/config.json ...
	I1212 00:49:51.194525 1179266 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:49:51.194574 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339
	I1212 00:49:51.212071 1179266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34085 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339/id_rsa Username:docker}
	I1212 00:49:51.306771 1179266 command_runner.go:130] > 12%!
	(MISSING)I1212 00:49:51.307262 1179266 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:49:51.312569 1179266 command_runner.go:130] > 172G
	I1212 00:49:51.313005 1179266 start.go:128] duration metric: createHost completed in 8.739704766s
	I1212 00:49:51.313025 1179266 start.go:83] releasing machines lock for "multinode-270339", held for 8.739857926s
	I1212 00:49:51.313102 1179266 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-270339
	I1212 00:49:51.330320 1179266 ssh_runner.go:195] Run: cat /version.json
	I1212 00:49:51.330338 1179266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:49:51.330377 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339
	I1212 00:49:51.330400 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339
	I1212 00:49:51.349927 1179266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34085 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339/id_rsa Username:docker}
	I1212 00:49:51.351677 1179266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34085 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339/id_rsa Username:docker}
	I1212 00:49:51.577269 1179266 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:49:51.580308 1179266 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1702334074-17764", "minikube_version": "v1.32.0", "commit": "ab8bc8972509537d0a802e7a72a692f75c1e7595"}
	I1212 00:49:51.580445 1179266 ssh_runner.go:195] Run: systemctl --version
	I1212 00:49:51.585981 1179266 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1212 00:49:51.586039 1179266 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 00:49:51.586128 1179266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:49:51.732725 1179266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:49:51.737938 1179266 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1212 00:49:51.737966 1179266 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1212 00:49:51.737974 1179266 command_runner.go:130] > Device: 3ah/58d	Inode: 1568803     Links: 1
	I1212 00:49:51.737985 1179266 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:49:51.737993 1179266 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1212 00:49:51.737999 1179266 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1212 00:49:51.738012 1179266 command_runner.go:130] > Change: 2023-12-12 00:11:51.073557127 +0000
	I1212 00:49:51.738024 1179266 command_runner.go:130] >  Birth: 2023-12-12 00:11:51.073557127 +0000
	I1212 00:49:51.738301 1179266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:49:51.762835 1179266 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 00:49:51.762924 1179266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:49:51.802335 1179266 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1212 00:49:51.802384 1179266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
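The two find commands above are how minikube takes the base image's stock CNI configurations out of service before installing its own networking: anything matching *loopback.conf*, *bridge* or *podman* under /etc/cni/net.d is renamed with a .mk_disabled suffix. Restated as plain shell (the "%!p(MISSING)" in the log is the mangled -printf "%p, " verb):

    # Disable the stock loopback CNI config, then the bridge/podman configs, by renaming them.
    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' -not -name '*.mk_disabled' \
        -exec sh -c 'sudo mv {} {}.mk_disabled' \;
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;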
	I1212 00:49:51.802393 1179266 start.go:475] detecting cgroup driver to use...
	I1212 00:49:51.802425 1179266 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 00:49:51.802486 1179266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:49:51.820542 1179266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:49:51.834262 1179266 docker.go:203] disabling cri-docker service (if available) ...
	I1212 00:49:51.834325 1179266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:49:51.850075 1179266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:49:51.866609 1179266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:49:51.964637 1179266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:49:52.073129 1179266 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1212 00:49:52.073161 1179266 docker.go:219] disabling docker service ...
	I1212 00:49:52.073236 1179266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:49:52.095452 1179266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:49:52.109448 1179266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:49:52.209548 1179266 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1212 00:49:52.209656 1179266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:49:52.222780 1179266 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1212 00:49:52.310004 1179266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:49:52.323594 1179266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:49:52.341654 1179266 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 00:49:52.343318 1179266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 00:49:52.344408 1179266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:49:52.356387 1179266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:49:52.356478 1179266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:49:52.368119 1179266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:49:52.379554 1179266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:49:52.391159 1179266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:49:52.401811 1179266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:49:52.410802 1179266 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:49:52.411987 1179266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:49:52.421915 1179266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:49:52.523056 1179266 ssh_runner.go:195] Run: sudo systemctl restart crio
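Taken together, the commands from the crictl.yaml write through the restart above configure the node's container runtime: point crictl at the CRI-O socket, set the pause image and the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf, drop any stale /etc/cni/net.mk directory, enable IPv4 forwarding, and restart CRI-O. As one consolidated sketch built only from the logged commands (the printf verb mangled as %!s(MISSING) is reconstructed):

    # Consolidated sketch of the CRI-O configuration steps logged above.
    sudo mkdir -p /etc && printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo rm -rf /etc/cni/net.mk
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio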
	I1212 00:49:52.653084 1179266 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:49:52.653196 1179266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:49:52.658009 1179266 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 00:49:52.658034 1179266 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 00:49:52.658070 1179266 command_runner.go:130] > Device: 44h/68d	Inode: 190         Links: 1
	I1212 00:49:52.658098 1179266 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:49:52.658114 1179266 command_runner.go:130] > Access: 2023-12-12 00:49:52.637325377 +0000
	I1212 00:49:52.658124 1179266 command_runner.go:130] > Modify: 2023-12-12 00:49:52.637325377 +0000
	I1212 00:49:52.658134 1179266 command_runner.go:130] > Change: 2023-12-12 00:49:52.637325377 +0000
	I1212 00:49:52.658164 1179266 command_runner.go:130] >  Birth: -
	I1212 00:49:52.658206 1179266 start.go:543] Will wait 60s for crictl version
	I1212 00:49:52.658282 1179266 ssh_runner.go:195] Run: which crictl
	I1212 00:49:52.662791 1179266 command_runner.go:130] > /usr/bin/crictl
	I1212 00:49:52.662861 1179266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:49:52.706006 1179266 command_runner.go:130] > Version:  0.1.0
	I1212 00:49:52.706165 1179266 command_runner.go:130] > RuntimeName:  cri-o
	I1212 00:49:52.706185 1179266 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1212 00:49:52.706192 1179266 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 00:49:52.708602 1179266 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1212 00:49:52.708682 1179266 ssh_runner.go:195] Run: crio --version
	I1212 00:49:52.750404 1179266 command_runner.go:130] > crio version 1.24.6
	I1212 00:49:52.750478 1179266 command_runner.go:130] > Version:          1.24.6
	I1212 00:49:52.750500 1179266 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1212 00:49:52.750527 1179266 command_runner.go:130] > GitTreeState:     clean
	I1212 00:49:52.750567 1179266 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1212 00:49:52.750589 1179266 command_runner.go:130] > GoVersion:        go1.18.2
	I1212 00:49:52.750607 1179266 command_runner.go:130] > Compiler:         gc
	I1212 00:49:52.750640 1179266 command_runner.go:130] > Platform:         linux/arm64
	I1212 00:49:52.750667 1179266 command_runner.go:130] > Linkmode:         dynamic
	I1212 00:49:52.750690 1179266 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 00:49:52.750721 1179266 command_runner.go:130] > SeccompEnabled:   true
	I1212 00:49:52.750742 1179266 command_runner.go:130] > AppArmorEnabled:  false
	I1212 00:49:52.752757 1179266 ssh_runner.go:195] Run: crio --version
	I1212 00:49:52.798708 1179266 command_runner.go:130] > crio version 1.24.6
	I1212 00:49:52.798777 1179266 command_runner.go:130] > Version:          1.24.6
	I1212 00:49:52.798810 1179266 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1212 00:49:52.798830 1179266 command_runner.go:130] > GitTreeState:     clean
	I1212 00:49:52.798869 1179266 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1212 00:49:52.798886 1179266 command_runner.go:130] > GoVersion:        go1.18.2
	I1212 00:49:52.798904 1179266 command_runner.go:130] > Compiler:         gc
	I1212 00:49:52.798937 1179266 command_runner.go:130] > Platform:         linux/arm64
	I1212 00:49:52.798960 1179266 command_runner.go:130] > Linkmode:         dynamic
	I1212 00:49:52.798988 1179266 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 00:49:52.799018 1179266 command_runner.go:130] > SeccompEnabled:   true
	I1212 00:49:52.799039 1179266 command_runner.go:130] > AppArmorEnabled:  false
	I1212 00:49:52.803663 1179266 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1212 00:49:52.805445 1179266 cli_runner.go:164] Run: docker network inspect multinode-270339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:49:52.826825 1179266 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1212 00:49:52.831608 1179266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:49:52.844992 1179266 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 00:49:52.845068 1179266 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:49:52.911759 1179266 command_runner.go:130] > {
	I1212 00:49:52.911782 1179266 command_runner.go:130] >   "images": [
	I1212 00:49:52.911787 1179266 command_runner.go:130] >     {
	I1212 00:49:52.911797 1179266 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1212 00:49:52.911803 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.911823 1179266 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1212 00:49:52.911828 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.911834 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.911854 1179266 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1212 00:49:52.911868 1179266 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1212 00:49:52.911873 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.911880 1179266 command_runner.go:130] >       "size": "60867618",
	I1212 00:49:52.911885 1179266 command_runner.go:130] >       "uid": null,
	I1212 00:49:52.911897 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.911907 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.911916 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.911921 1179266 command_runner.go:130] >     },
	I1212 00:49:52.911928 1179266 command_runner.go:130] >     {
	I1212 00:49:52.911937 1179266 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:49:52.911945 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.911952 1179266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:49:52.911960 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.911965 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.911978 1179266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 00:49:52.911988 1179266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:49:52.911996 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.912004 1179266 command_runner.go:130] >       "size": "29037500",
	I1212 00:49:52.912013 1179266 command_runner.go:130] >       "uid": null,
	I1212 00:49:52.912018 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.912023 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.912031 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.912036 1179266 command_runner.go:130] >     },
	I1212 00:49:52.912045 1179266 command_runner.go:130] >     {
	I1212 00:49:52.912058 1179266 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1212 00:49:52.912066 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.912074 1179266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1212 00:49:52.912082 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.912088 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.912100 1179266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1212 00:49:52.912112 1179266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1212 00:49:52.912120 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.912125 1179266 command_runner.go:130] >       "size": "51393451",
	I1212 00:49:52.912133 1179266 command_runner.go:130] >       "uid": null,
	I1212 00:49:52.912138 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.912147 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.912152 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.912160 1179266 command_runner.go:130] >     },
	I1212 00:49:52.912165 1179266 command_runner.go:130] >     {
	I1212 00:49:52.912175 1179266 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1212 00:49:52.912184 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.912193 1179266 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1212 00:49:52.912201 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.912206 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.912218 1179266 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1212 00:49:52.912230 1179266 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1212 00:49:52.912244 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.912254 1179266 command_runner.go:130] >       "size": "182203183",
	I1212 00:49:52.912259 1179266 command_runner.go:130] >       "uid": {
	I1212 00:49:52.912264 1179266 command_runner.go:130] >         "value": "0"
	I1212 00:49:52.912272 1179266 command_runner.go:130] >       },
	I1212 00:49:52.912277 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.912286 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.912291 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.912299 1179266 command_runner.go:130] >     },
	I1212 00:49:52.912303 1179266 command_runner.go:130] >     {
	I1212 00:49:52.912317 1179266 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I1212 00:49:52.912325 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.912332 1179266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1212 00:49:52.912338 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.912347 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.912356 1179266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I1212 00:49:52.912369 1179266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I1212 00:49:52.912376 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.912382 1179266 command_runner.go:130] >       "size": "121119694",
	I1212 00:49:52.912390 1179266 command_runner.go:130] >       "uid": {
	I1212 00:49:52.912395 1179266 command_runner.go:130] >         "value": "0"
	I1212 00:49:52.912400 1179266 command_runner.go:130] >       },
	I1212 00:49:52.912408 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.912413 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.912422 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.912426 1179266 command_runner.go:130] >     },
	I1212 00:49:52.912434 1179266 command_runner.go:130] >     {
	I1212 00:49:52.912442 1179266 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I1212 00:49:52.912450 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.912457 1179266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1212 00:49:52.912465 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.912475 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.912488 1179266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1212 00:49:52.912501 1179266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I1212 00:49:52.912509 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.912515 1179266 command_runner.go:130] >       "size": "117252916",
	I1212 00:49:52.912523 1179266 command_runner.go:130] >       "uid": {
	I1212 00:49:52.912527 1179266 command_runner.go:130] >         "value": "0"
	I1212 00:49:52.912532 1179266 command_runner.go:130] >       },
	I1212 00:49:52.912540 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.912545 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.912553 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.912559 1179266 command_runner.go:130] >     },
	I1212 00:49:52.912567 1179266 command_runner.go:130] >     {
	I1212 00:49:52.912575 1179266 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I1212 00:49:52.912580 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.912586 1179266 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1212 00:49:52.912593 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.912599 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.912613 1179266 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I1212 00:49:52.912626 1179266 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1212 00:49:52.912634 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.912639 1179266 command_runner.go:130] >       "size": "69992343",
	I1212 00:49:52.912647 1179266 command_runner.go:130] >       "uid": null,
	I1212 00:49:52.912653 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.912660 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.912666 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.912673 1179266 command_runner.go:130] >     },
	I1212 00:49:52.912678 1179266 command_runner.go:130] >     {
	I1212 00:49:52.912689 1179266 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I1212 00:49:52.912697 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.912703 1179266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1212 00:49:52.912711 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.912717 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.912760 1179266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1212 00:49:52.912775 1179266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I1212 00:49:52.912784 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.912792 1179266 command_runner.go:130] >       "size": "59253556",
	I1212 00:49:52.912801 1179266 command_runner.go:130] >       "uid": {
	I1212 00:49:52.912806 1179266 command_runner.go:130] >         "value": "0"
	I1212 00:49:52.912814 1179266 command_runner.go:130] >       },
	I1212 00:49:52.912819 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.912827 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.912833 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.912838 1179266 command_runner.go:130] >     },
	I1212 00:49:52.912845 1179266 command_runner.go:130] >     {
	I1212 00:49:52.912854 1179266 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1212 00:49:52.912862 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.912868 1179266 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1212 00:49:52.912875 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.912881 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.912890 1179266 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1212 00:49:52.912902 1179266 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1212 00:49:52.912910 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.912915 1179266 command_runner.go:130] >       "size": "520014",
	I1212 00:49:52.912925 1179266 command_runner.go:130] >       "uid": {
	I1212 00:49:52.912934 1179266 command_runner.go:130] >         "value": "65535"
	I1212 00:49:52.912939 1179266 command_runner.go:130] >       },
	I1212 00:49:52.912944 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.912953 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.912958 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.912966 1179266 command_runner.go:130] >     }
	I1212 00:49:52.912970 1179266 command_runner.go:130] >   ]
	I1212 00:49:52.912977 1179266 command_runner.go:130] > }
	I1212 00:49:52.915351 1179266 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 00:49:52.915373 1179266 crio.go:415] Images already preloaded, skipping extraction
	I1212 00:49:52.915430 1179266 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:49:52.962390 1179266 command_runner.go:130] > {
	I1212 00:49:52.962416 1179266 command_runner.go:130] >   "images": [
	I1212 00:49:52.962422 1179266 command_runner.go:130] >     {
	I1212 00:49:52.962437 1179266 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1212 00:49:52.962443 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.962451 1179266 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1212 00:49:52.962456 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.962461 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.962473 1179266 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1212 00:49:52.962489 1179266 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1212 00:49:52.962497 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.962502 1179266 command_runner.go:130] >       "size": "60867618",
	I1212 00:49:52.962507 1179266 command_runner.go:130] >       "uid": null,
	I1212 00:49:52.962513 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.962526 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.962531 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.962539 1179266 command_runner.go:130] >     },
	I1212 00:49:52.962544 1179266 command_runner.go:130] >     {
	I1212 00:49:52.962554 1179266 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:49:52.962563 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.962571 1179266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:49:52.962575 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.962581 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.962590 1179266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 00:49:52.962600 1179266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:49:52.962604 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.962612 1179266 command_runner.go:130] >       "size": "29037500",
	I1212 00:49:52.962618 1179266 command_runner.go:130] >       "uid": null,
	I1212 00:49:52.962623 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.962628 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.962633 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.962637 1179266 command_runner.go:130] >     },
	I1212 00:49:52.962642 1179266 command_runner.go:130] >     {
	I1212 00:49:52.962649 1179266 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1212 00:49:52.962654 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.962660 1179266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1212 00:49:52.962668 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.962673 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.962685 1179266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1212 00:49:52.962695 1179266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1212 00:49:52.962702 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.962708 1179266 command_runner.go:130] >       "size": "51393451",
	I1212 00:49:52.962713 1179266 command_runner.go:130] >       "uid": null,
	I1212 00:49:52.962720 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.962726 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.962733 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.962741 1179266 command_runner.go:130] >     },
	I1212 00:49:52.962745 1179266 command_runner.go:130] >     {
	I1212 00:49:52.962753 1179266 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1212 00:49:52.962761 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.962774 1179266 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1212 00:49:52.962778 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.962783 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.962795 1179266 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1212 00:49:52.962804 1179266 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1212 00:49:52.962817 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.962823 1179266 command_runner.go:130] >       "size": "182203183",
	I1212 00:49:52.962830 1179266 command_runner.go:130] >       "uid": {
	I1212 00:49:52.962835 1179266 command_runner.go:130] >         "value": "0"
	I1212 00:49:52.962839 1179266 command_runner.go:130] >       },
	I1212 00:49:52.962844 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.962849 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.962856 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.962869 1179266 command_runner.go:130] >     },
	I1212 00:49:52.962878 1179266 command_runner.go:130] >     {
	I1212 00:49:52.962886 1179266 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I1212 00:49:52.962891 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.962897 1179266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1212 00:49:52.962902 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.962907 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.962916 1179266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I1212 00:49:52.962928 1179266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I1212 00:49:52.962932 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.962937 1179266 command_runner.go:130] >       "size": "121119694",
	I1212 00:49:52.962942 1179266 command_runner.go:130] >       "uid": {
	I1212 00:49:52.962949 1179266 command_runner.go:130] >         "value": "0"
	I1212 00:49:52.962954 1179266 command_runner.go:130] >       },
	I1212 00:49:52.962960 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.962967 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.962972 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.962976 1179266 command_runner.go:130] >     },
	I1212 00:49:52.962987 1179266 command_runner.go:130] >     {
	I1212 00:49:52.962999 1179266 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I1212 00:49:52.963003 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.963017 1179266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1212 00:49:52.963021 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.963026 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.963036 1179266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1212 00:49:52.963049 1179266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I1212 00:49:52.963054 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.963061 1179266 command_runner.go:130] >       "size": "117252916",
	I1212 00:49:52.963069 1179266 command_runner.go:130] >       "uid": {
	I1212 00:49:52.963074 1179266 command_runner.go:130] >         "value": "0"
	I1212 00:49:52.963079 1179266 command_runner.go:130] >       },
	I1212 00:49:52.963086 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.963091 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.963096 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.963103 1179266 command_runner.go:130] >     },
	I1212 00:49:52.963107 1179266 command_runner.go:130] >     {
	I1212 00:49:52.963116 1179266 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I1212 00:49:52.963123 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.963130 1179266 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1212 00:49:52.963136 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.963141 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.963153 1179266 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I1212 00:49:52.963162 1179266 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1212 00:49:52.963170 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.963175 1179266 command_runner.go:130] >       "size": "69992343",
	I1212 00:49:52.963180 1179266 command_runner.go:130] >       "uid": null,
	I1212 00:49:52.963187 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.963192 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.963197 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.963201 1179266 command_runner.go:130] >     },
	I1212 00:49:52.963207 1179266 command_runner.go:130] >     {
	I1212 00:49:52.963217 1179266 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I1212 00:49:52.963225 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.963232 1179266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1212 00:49:52.963238 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.963244 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.963284 1179266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1212 00:49:52.963300 1179266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I1212 00:49:52.963305 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.963310 1179266 command_runner.go:130] >       "size": "59253556",
	I1212 00:49:52.963320 1179266 command_runner.go:130] >       "uid": {
	I1212 00:49:52.963325 1179266 command_runner.go:130] >         "value": "0"
	I1212 00:49:52.963329 1179266 command_runner.go:130] >       },
	I1212 00:49:52.963336 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.963341 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.963346 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.963353 1179266 command_runner.go:130] >     },
	I1212 00:49:52.963357 1179266 command_runner.go:130] >     {
	I1212 00:49:52.963364 1179266 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1212 00:49:52.963371 1179266 command_runner.go:130] >       "repoTags": [
	I1212 00:49:52.963377 1179266 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1212 00:49:52.963384 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.963391 1179266 command_runner.go:130] >       "repoDigests": [
	I1212 00:49:52.963400 1179266 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1212 00:49:52.963412 1179266 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1212 00:49:52.963417 1179266 command_runner.go:130] >       ],
	I1212 00:49:52.963422 1179266 command_runner.go:130] >       "size": "520014",
	I1212 00:49:52.963429 1179266 command_runner.go:130] >       "uid": {
	I1212 00:49:52.963434 1179266 command_runner.go:130] >         "value": "65535"
	I1212 00:49:52.963438 1179266 command_runner.go:130] >       },
	I1212 00:49:52.963448 1179266 command_runner.go:130] >       "username": "",
	I1212 00:49:52.963453 1179266 command_runner.go:130] >       "spec": null,
	I1212 00:49:52.963458 1179266 command_runner.go:130] >       "pinned": false
	I1212 00:49:52.963464 1179266 command_runner.go:130] >     }
	I1212 00:49:52.963469 1179266 command_runner.go:130] >   ]
	I1212 00:49:52.963475 1179266 command_runner.go:130] > }
	I1212 00:49:52.966705 1179266 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 00:49:52.966724 1179266 cache_images.go:84] Images are preloaded, skipping loading
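The two identical crictl image dumps above are minikube's check that the preloaded tarball already contains every image needed for Kubernetes v1.28.4 on CRI-O, so nothing has to be pulled or extracted. For reference, the same check can be reproduced by hand as sketched below, assuming jq is installed on the node (jq is not part of the logged run):

    # Hedged example: list the repo tags CRI-O already has in its store (assumes jq is available).
    sudo crictl images --output json | jq -r '.images[].repoTags[]'
    # Per the JSON above, this should include registry.k8s.io/kube-apiserver:v1.28.4,
    # kube-controller-manager, kube-scheduler and kube-proxy (all v1.28.4), etcd:3.5.9-0,
    # coredns:v1.10.1, pause:3.9, kindnetd and the storage-provisioner image.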
	I1212 00:49:52.966818 1179266 ssh_runner.go:195] Run: crio config
	I1212 00:49:53.023415 1179266 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 00:49:53.023444 1179266 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 00:49:53.023453 1179266 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 00:49:53.023457 1179266 command_runner.go:130] > #
	I1212 00:49:53.023465 1179266 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 00:49:53.023473 1179266 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 00:49:53.023481 1179266 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 00:49:53.023494 1179266 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 00:49:53.023503 1179266 command_runner.go:130] > # reload'.
	I1212 00:49:53.023511 1179266 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 00:49:53.023519 1179266 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 00:49:53.023530 1179266 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 00:49:53.023538 1179266 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 00:49:53.023547 1179266 command_runner.go:130] > [crio]
	I1212 00:49:53.023554 1179266 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 00:49:53.023566 1179266 command_runner.go:130] > # containers images, in this directory.
	I1212 00:49:53.023574 1179266 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1212 00:49:53.023584 1179266 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 00:49:53.023594 1179266 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1212 00:49:53.023602 1179266 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 00:49:53.023613 1179266 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 00:49:53.023618 1179266 command_runner.go:130] > # storage_driver = "vfs"
	I1212 00:49:53.023629 1179266 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 00:49:53.023636 1179266 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 00:49:53.023644 1179266 command_runner.go:130] > # storage_option = [
	I1212 00:49:53.023649 1179266 command_runner.go:130] > # ]
	I1212 00:49:53.023657 1179266 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 00:49:53.023666 1179266 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 00:49:53.023880 1179266 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 00:49:53.023897 1179266 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 00:49:53.023905 1179266 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 00:49:53.023911 1179266 command_runner.go:130] > # always happen on a node reboot
	I1212 00:49:53.024108 1179266 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 00:49:53.024125 1179266 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 00:49:53.024133 1179266 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 00:49:53.024153 1179266 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 00:49:53.024163 1179266 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 00:49:53.024173 1179266 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 00:49:53.024186 1179266 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 00:49:53.024194 1179266 command_runner.go:130] > # internal_wipe = true
	I1212 00:49:53.024204 1179266 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 00:49:53.024213 1179266 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 00:49:53.024224 1179266 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 00:49:53.024231 1179266 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 00:49:53.024240 1179266 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 00:49:53.024248 1179266 command_runner.go:130] > [crio.api]
	I1212 00:49:53.024255 1179266 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 00:49:53.024260 1179266 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 00:49:53.024270 1179266 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 00:49:53.024276 1179266 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 00:49:53.024290 1179266 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 00:49:53.024302 1179266 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 00:49:53.024307 1179266 command_runner.go:130] > # stream_port = "0"
	I1212 00:49:53.024314 1179266 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 00:49:53.024320 1179266 command_runner.go:130] > # stream_enable_tls = false
	I1212 00:49:53.024331 1179266 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 00:49:53.024339 1179266 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 00:49:53.024351 1179266 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 00:49:53.024363 1179266 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 00:49:53.024368 1179266 command_runner.go:130] > # minutes.
	I1212 00:49:53.024378 1179266 command_runner.go:130] > # stream_tls_cert = ""
	I1212 00:49:53.024386 1179266 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 00:49:53.024397 1179266 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 00:49:53.024403 1179266 command_runner.go:130] > # stream_tls_key = ""
	I1212 00:49:53.024414 1179266 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 00:49:53.024422 1179266 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 00:49:53.024431 1179266 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 00:49:53.024439 1179266 command_runner.go:130] > # stream_tls_ca = ""
	I1212 00:49:53.024453 1179266 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 00:49:53.024459 1179266 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1212 00:49:53.024472 1179266 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 00:49:53.024477 1179266 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1212 00:49:53.024514 1179266 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 00:49:53.024527 1179266 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 00:49:53.024532 1179266 command_runner.go:130] > [crio.runtime]
	I1212 00:49:53.024542 1179266 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 00:49:53.024552 1179266 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 00:49:53.024557 1179266 command_runner.go:130] > # "nofile=1024:2048"
	I1212 00:49:53.024569 1179266 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 00:49:53.024575 1179266 command_runner.go:130] > # default_ulimits = [
	I1212 00:49:53.024582 1179266 command_runner.go:130] > # ]
	I1212 00:49:53.024590 1179266 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 00:49:53.024600 1179266 command_runner.go:130] > # no_pivot = false
	I1212 00:49:53.024607 1179266 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 00:49:53.024618 1179266 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 00:49:53.024624 1179266 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 00:49:53.024631 1179266 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 00:49:53.024638 1179266 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 00:49:53.024649 1179266 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:49:53.024654 1179266 command_runner.go:130] > # conmon = ""
	I1212 00:49:53.024664 1179266 command_runner.go:130] > # Cgroup setting for conmon
	I1212 00:49:53.024672 1179266 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 00:49:53.024680 1179266 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 00:49:53.024690 1179266 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 00:49:53.024699 1179266 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 00:49:53.024707 1179266 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:49:53.024712 1179266 command_runner.go:130] > # conmon_env = [
	I1212 00:49:53.024716 1179266 command_runner.go:130] > # ]
	I1212 00:49:53.024727 1179266 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 00:49:53.024733 1179266 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 00:49:53.024744 1179266 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 00:49:53.024749 1179266 command_runner.go:130] > # default_env = [
	I1212 00:49:53.024757 1179266 command_runner.go:130] > # ]
	I1212 00:49:53.024764 1179266 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 00:49:53.024774 1179266 command_runner.go:130] > # selinux = false
	I1212 00:49:53.024782 1179266 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 00:49:53.024798 1179266 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 00:49:53.024809 1179266 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 00:49:53.024814 1179266 command_runner.go:130] > # seccomp_profile = ""
	I1212 00:49:53.024825 1179266 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 00:49:53.024832 1179266 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 00:49:53.024845 1179266 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 00:49:53.024856 1179266 command_runner.go:130] > # which might increase security.
	I1212 00:49:53.024868 1179266 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1212 00:49:53.024876 1179266 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 00:49:53.024887 1179266 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 00:49:53.024895 1179266 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 00:49:53.024906 1179266 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 00:49:53.024913 1179266 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:49:53.025124 1179266 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 00:49:53.025139 1179266 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 00:49:53.025145 1179266 command_runner.go:130] > # the cgroup blockio controller.
	I1212 00:49:53.025161 1179266 command_runner.go:130] > # blockio_config_file = ""
	I1212 00:49:53.025175 1179266 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 00:49:53.025181 1179266 command_runner.go:130] > # irqbalance daemon.
	I1212 00:49:53.025192 1179266 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 00:49:53.025200 1179266 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 00:49:53.025210 1179266 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:49:53.025215 1179266 command_runner.go:130] > # rdt_config_file = ""
	I1212 00:49:53.025222 1179266 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 00:49:53.025233 1179266 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 00:49:53.025254 1179266 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 00:49:53.025266 1179266 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 00:49:53.025274 1179266 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 00:49:53.025285 1179266 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 00:49:53.025290 1179266 command_runner.go:130] > # will be added.
	I1212 00:49:53.025299 1179266 command_runner.go:130] > # default_capabilities = [
	I1212 00:49:53.025311 1179266 command_runner.go:130] > # 	"CHOWN",
	I1212 00:49:53.025317 1179266 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 00:49:53.025322 1179266 command_runner.go:130] > # 	"FSETID",
	I1212 00:49:53.025331 1179266 command_runner.go:130] > # 	"FOWNER",
	I1212 00:49:53.025335 1179266 command_runner.go:130] > # 	"SETGID",
	I1212 00:49:53.025530 1179266 command_runner.go:130] > # 	"SETUID",
	I1212 00:49:53.025542 1179266 command_runner.go:130] > # 	"SETPCAP",
	I1212 00:49:53.025547 1179266 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 00:49:53.025552 1179266 command_runner.go:130] > # 	"KILL",
	I1212 00:49:53.025555 1179266 command_runner.go:130] > # ]
	I1212 00:49:53.025577 1179266 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1212 00:49:53.025590 1179266 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1212 00:49:53.025597 1179266 command_runner.go:130] > # add_inheritable_capabilities = true
	I1212 00:49:53.025609 1179266 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 00:49:53.025616 1179266 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:49:53.025628 1179266 command_runner.go:130] > # default_sysctls = [
	I1212 00:49:53.025633 1179266 command_runner.go:130] > # ]
	I1212 00:49:53.025645 1179266 command_runner.go:130] > # List of devices on the host that a
	I1212 00:49:53.025657 1179266 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 00:49:53.025662 1179266 command_runner.go:130] > # allowed_devices = [
	I1212 00:49:53.025670 1179266 command_runner.go:130] > # 	"/dev/fuse",
	I1212 00:49:53.025674 1179266 command_runner.go:130] > # ]
	I1212 00:49:53.025680 1179266 command_runner.go:130] > # List of additional devices, specified as
	I1212 00:49:53.025726 1179266 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 00:49:53.025737 1179266 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 00:49:53.025744 1179266 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:49:53.025751 1179266 command_runner.go:130] > # additional_devices = [
	I1212 00:49:53.025759 1179266 command_runner.go:130] > # ]
	I1212 00:49:53.025765 1179266 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 00:49:53.025770 1179266 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 00:49:53.025778 1179266 command_runner.go:130] > # 	"/etc/cdi",
	I1212 00:49:53.025784 1179266 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 00:49:53.025788 1179266 command_runner.go:130] > # ]
	I1212 00:49:53.025807 1179266 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 00:49:53.025816 1179266 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 00:49:53.025824 1179266 command_runner.go:130] > # Defaults to false.
	I1212 00:49:53.025831 1179266 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 00:49:53.025838 1179266 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 00:49:53.025846 1179266 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 00:49:53.025854 1179266 command_runner.go:130] > # hooks_dir = [
	I1212 00:49:53.025860 1179266 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 00:49:53.025870 1179266 command_runner.go:130] > # ]
	I1212 00:49:53.025881 1179266 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 00:49:53.025889 1179266 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 00:49:53.025899 1179266 command_runner.go:130] > # its default mounts from the following two files:
	I1212 00:49:53.025903 1179266 command_runner.go:130] > #
	I1212 00:49:53.025911 1179266 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 00:49:53.025918 1179266 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 00:49:53.025929 1179266 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 00:49:53.025934 1179266 command_runner.go:130] > #
	I1212 00:49:53.025951 1179266 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 00:49:53.025964 1179266 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 00:49:53.025972 1179266 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 00:49:53.025981 1179266 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 00:49:53.025986 1179266 command_runner.go:130] > #
	I1212 00:49:53.025991 1179266 command_runner.go:130] > # default_mounts_file = ""
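For orientation only: the /SRC:/DST format described above could be exercised by writing the override file mentioned in point 1. The host path /etc/custom-certs is made up for this sketch; this test run does not configure any default mounts.

	# sketch: one default mount per line, in /SRC:/DST form, written to the override file
	printf '%s\n' '/etc/custom-certs:/etc/custom-certs' | sudo tee /etc/containers/mounts.conf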
	I1212 00:49:53.025998 1179266 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 00:49:53.026006 1179266 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 00:49:53.026022 1179266 command_runner.go:130] > # pids_limit = 0
	I1212 00:49:53.026034 1179266 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 00:49:53.026042 1179266 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 00:49:53.026053 1179266 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 00:49:53.026063 1179266 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 00:49:53.026071 1179266 command_runner.go:130] > # log_size_max = -1
	I1212 00:49:53.026079 1179266 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 00:49:53.026087 1179266 command_runner.go:130] > # log_to_journald = false
	I1212 00:49:53.026103 1179266 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 00:49:53.026110 1179266 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 00:49:53.026119 1179266 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 00:49:53.026344 1179266 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 00:49:53.026361 1179266 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 00:49:53.026378 1179266 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 00:49:53.026385 1179266 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 00:49:53.026394 1179266 command_runner.go:130] > # read_only = false
	I1212 00:49:53.026403 1179266 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 00:49:53.026413 1179266 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 00:49:53.026420 1179266 command_runner.go:130] > # live configuration reload.
	I1212 00:49:53.026428 1179266 command_runner.go:130] > # log_level = "info"
	I1212 00:49:53.026435 1179266 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 00:49:53.026483 1179266 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:49:53.026496 1179266 command_runner.go:130] > # log_filter = ""
	I1212 00:49:53.026504 1179266 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 00:49:53.026517 1179266 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 00:49:53.026526 1179266 command_runner.go:130] > # separated by comma.
	I1212 00:49:53.026531 1179266 command_runner.go:130] > # uid_mappings = ""
	I1212 00:49:53.026538 1179266 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 00:49:53.026545 1179266 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 00:49:53.026562 1179266 command_runner.go:130] > # separated by comma.
	I1212 00:49:53.026568 1179266 command_runner.go:130] > # gid_mappings = ""
	I1212 00:49:53.026580 1179266 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 00:49:53.026588 1179266 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:49:53.026598 1179266 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:49:53.026604 1179266 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 00:49:53.026615 1179266 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 00:49:53.026623 1179266 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:49:53.026636 1179266 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:49:53.026645 1179266 command_runner.go:130] > # minimum_mappable_gid = -1
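A minimal sketch of the containerUID:HostUID:Size form described above, assuming the /etc/crio/crio.conf.d drop-in directory is honored by this CRI-O build and that the 100000:65536 subordinate range is actually delegated on the host; the drop-in file name is hypothetical and this test does not set any mappings:

	# sketch: map container root to an unprivileged host range, then restart CRI-O
	sudo tee /etc/crio/crio.conf.d/20-userns.conf <<-'EOF'
	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"
	EOF
	sudo systemctl restart crio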
	I1212 00:49:53.026653 1179266 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 00:49:53.026664 1179266 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 00:49:53.026671 1179266 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 00:49:53.026681 1179266 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 00:49:53.026689 1179266 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 00:49:53.026700 1179266 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 00:49:53.026713 1179266 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 00:49:53.026719 1179266 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 00:49:53.026729 1179266 command_runner.go:130] > # drop_infra_ctr = true
	I1212 00:49:53.026736 1179266 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 00:49:53.026744 1179266 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 00:49:53.026759 1179266 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 00:49:53.026767 1179266 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 00:49:53.026775 1179266 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 00:49:53.026796 1179266 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 00:49:53.027032 1179266 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 00:49:53.027052 1179266 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 00:49:53.027058 1179266 command_runner.go:130] > # pinns_path = ""
	I1212 00:49:53.027066 1179266 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 00:49:53.027078 1179266 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 00:49:53.027086 1179266 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 00:49:53.027110 1179266 command_runner.go:130] > # default_runtime = "runc"
	I1212 00:49:53.027122 1179266 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 00:49:53.027131 1179266 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1212 00:49:53.027146 1179266 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 00:49:53.027153 1179266 command_runner.go:130] > # creation as a file is not desired either.
	I1212 00:49:53.027163 1179266 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 00:49:53.027177 1179266 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 00:49:53.027188 1179266 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 00:49:53.027192 1179266 command_runner.go:130] > # ]
	I1212 00:49:53.027200 1179266 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 00:49:53.027211 1179266 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 00:49:53.027219 1179266 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 00:49:53.027231 1179266 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 00:49:53.027235 1179266 command_runner.go:130] > #
	I1212 00:49:53.027246 1179266 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 00:49:53.027256 1179266 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 00:49:53.027261 1179266 command_runner.go:130] > #  runtime_type = "oci"
	I1212 00:49:53.027272 1179266 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 00:49:53.027280 1179266 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 00:49:53.027289 1179266 command_runner.go:130] > #  allowed_annotations = []
	I1212 00:49:53.027294 1179266 command_runner.go:130] > # Where:
	I1212 00:49:53.027300 1179266 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 00:49:53.027313 1179266 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 00:49:53.027327 1179266 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 00:49:53.027335 1179266 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 00:49:53.027343 1179266 command_runner.go:130] > #   in $PATH.
	I1212 00:49:53.027350 1179266 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 00:49:53.027359 1179266 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 00:49:53.027367 1179266 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 00:49:53.027375 1179266 command_runner.go:130] > #   state.
	I1212 00:49:53.027383 1179266 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 00:49:53.027395 1179266 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1212 00:49:53.027407 1179266 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 00:49:53.027414 1179266 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 00:49:53.027422 1179266 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 00:49:53.027434 1179266 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 00:49:53.027442 1179266 command_runner.go:130] > #   The currently recognized values are:
	I1212 00:49:53.027454 1179266 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 00:49:53.027463 1179266 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 00:49:53.027479 1179266 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 00:49:53.027486 1179266 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 00:49:53.027495 1179266 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 00:49:53.027505 1179266 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 00:49:53.027516 1179266 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 00:49:53.027525 1179266 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 00:49:53.027534 1179266 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 00:49:53.027545 1179266 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 00:49:53.027552 1179266 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1212 00:49:53.027561 1179266 command_runner.go:130] > runtime_type = "oci"
	I1212 00:49:53.027566 1179266 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 00:49:53.027571 1179266 command_runner.go:130] > runtime_config_path = ""
	I1212 00:49:53.027575 1179266 command_runner.go:130] > monitor_path = ""
	I1212 00:49:53.027580 1179266 command_runner.go:130] > monitor_cgroup = ""
	I1212 00:49:53.027585 1179266 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 00:49:53.027628 1179266 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 00:49:53.027640 1179266 command_runner.go:130] > # running containers
	I1212 00:49:53.027646 1179266 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 00:49:53.027653 1179266 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 00:49:53.027663 1179266 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 00:49:53.027672 1179266 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 00:49:53.027679 1179266 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 00:49:53.027684 1179266 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 00:49:53.027696 1179266 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 00:49:53.027705 1179266 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 00:49:53.027711 1179266 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 00:49:53.027719 1179266 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
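As a sketch of the runtime-handler table format documented above, an extra handler such as crun could be registered with a drop-in like the one below. The binary path /usr/bin/crun, the drop-in directory and the file name are assumptions for illustration; crun is not installed or configured by this test.

	# sketch: register crun as an additional OCI runtime handler
	sudo tee /etc/crio/crio.conf.d/10-crun.conf <<-'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	EOF
	sudo systemctl restart crio

A pod would then select this handler through a Kubernetes RuntimeClass whose handler field is "crun".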
	I1212 00:49:53.027806 1179266 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 00:49:53.027822 1179266 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 00:49:53.027830 1179266 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 00:49:53.027839 1179266 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 00:49:53.027852 1179266 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 00:49:53.027859 1179266 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 00:49:53.027882 1179266 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 00:49:53.027896 1179266 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 00:49:53.027904 1179266 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 00:49:53.027915 1179266 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 00:49:53.027919 1179266 command_runner.go:130] > # Example:
	I1212 00:49:53.027925 1179266 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 00:49:53.027933 1179266 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 00:49:53.027940 1179266 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 00:49:53.027956 1179266 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 00:49:53.027960 1179266 command_runner.go:130] > # cpuset = 0
	I1212 00:49:53.027967 1179266 command_runner.go:130] > # cpushares = "0-1"
	I1212 00:49:53.027972 1179266 command_runner.go:130] > # Where:
	I1212 00:49:53.027979 1179266 command_runner.go:130] > # The workload name is workload-type.
	I1212 00:49:53.027990 1179266 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 00:49:53.027997 1179266 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 00:49:53.028004 1179266 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 00:49:53.028023 1179266 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 00:49:53.028034 1179266 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 00:49:53.028040 1179266 command_runner.go:130] > # 
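Assuming the commented [crio.runtime.workloads.workload-type] example above were actually enabled, a pod would opt in via the activation annotation roughly as in the sketch below; the pod name, container name and the "512" share value are made up for illustration and nothing in this test uses workloads:

	# sketch: opt a pod into the example workload and override cpushares for one container
	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""
	    io.crio.workload-type/demo: '{"cpushares": "512"}'
	spec:
	  containers:
	  - name: demo
	    image: registry.k8s.io/pause:3.9
	EOF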
	I1212 00:49:53.028051 1179266 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 00:49:53.028055 1179266 command_runner.go:130] > #
	I1212 00:49:53.028063 1179266 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 00:49:53.028071 1179266 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 00:49:53.028079 1179266 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 00:49:53.028086 1179266 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 00:49:53.028099 1179266 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 00:49:53.028104 1179266 command_runner.go:130] > [crio.image]
	I1212 00:49:53.028111 1179266 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 00:49:53.028117 1179266 command_runner.go:130] > # default_transport = "docker://"
	I1212 00:49:53.028127 1179266 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 00:49:53.028135 1179266 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:49:53.028142 1179266 command_runner.go:130] > # global_auth_file = ""
	I1212 00:49:53.028151 1179266 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 00:49:53.028158 1179266 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:49:53.028170 1179266 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 00:49:53.028183 1179266 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 00:49:53.028193 1179266 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:49:53.028199 1179266 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:49:53.028208 1179266 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 00:49:53.028216 1179266 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 00:49:53.028227 1179266 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1212 00:49:53.028235 1179266 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1212 00:49:53.028250 1179266 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 00:49:53.028259 1179266 command_runner.go:130] > # pause_command = "/pause"
	I1212 00:49:53.028266 1179266 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 00:49:53.028274 1179266 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 00:49:53.028284 1179266 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 00:49:53.028292 1179266 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 00:49:53.028301 1179266 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 00:49:53.028306 1179266 command_runner.go:130] > # signature_policy = ""
	I1212 00:49:53.028321 1179266 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 00:49:53.028332 1179266 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 00:49:53.028338 1179266 command_runner.go:130] > # changing them here.
	I1212 00:49:53.028346 1179266 command_runner.go:130] > # insecure_registries = [
	I1212 00:49:53.028353 1179266 command_runner.go:130] > # ]
	I1212 00:49:53.028361 1179266 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 00:49:53.028371 1179266 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 00:49:53.028697 1179266 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 00:49:53.028715 1179266 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 00:49:53.028721 1179266 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 00:49:53.028729 1179266 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 00:49:53.028734 1179266 command_runner.go:130] > # CNI plugins.
	I1212 00:49:53.028738 1179266 command_runner.go:130] > [crio.network]
	I1212 00:49:53.028748 1179266 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 00:49:53.028755 1179266 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1212 00:49:53.028769 1179266 command_runner.go:130] > # cni_default_network = ""
	I1212 00:49:53.028777 1179266 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 00:49:53.028786 1179266 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 00:49:53.028793 1179266 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 00:49:53.028801 1179266 command_runner.go:130] > # plugin_dirs = [
	I1212 00:49:53.028805 1179266 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 00:49:53.028810 1179266 command_runner.go:130] > # ]
	I1212 00:49:53.028827 1179266 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 00:49:53.028842 1179266 command_runner.go:130] > [crio.metrics]
	I1212 00:49:53.028851 1179266 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 00:49:53.028858 1179266 command_runner.go:130] > # enable_metrics = false
	I1212 00:49:53.028867 1179266 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 00:49:53.028872 1179266 command_runner.go:130] > # By default, all metrics are enabled.
	I1212 00:49:53.028884 1179266 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 00:49:53.028891 1179266 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 00:49:53.028898 1179266 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 00:49:53.028903 1179266 command_runner.go:130] > # metrics_collectors = [
	I1212 00:49:53.028916 1179266 command_runner.go:130] > # 	"operations",
	I1212 00:49:53.028922 1179266 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 00:49:53.028931 1179266 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 00:49:53.028937 1179266 command_runner.go:130] > # 	"operations_errors",
	I1212 00:49:53.028944 1179266 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 00:49:53.028952 1179266 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 00:49:53.028957 1179266 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 00:49:53.028963 1179266 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 00:49:53.029151 1179266 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 00:49:53.029178 1179266 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 00:49:53.029184 1179266 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 00:49:53.029189 1179266 command_runner.go:130] > # 	"containers_oom_total",
	I1212 00:49:53.029194 1179266 command_runner.go:130] > # 	"containers_oom",
	I1212 00:49:53.029199 1179266 command_runner.go:130] > # 	"processes_defunct",
	I1212 00:49:53.029204 1179266 command_runner.go:130] > # 	"operations_total",
	I1212 00:49:53.029209 1179266 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 00:49:53.029214 1179266 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 00:49:53.029220 1179266 command_runner.go:130] > # 	"operations_errors_total",
	I1212 00:49:53.029228 1179266 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 00:49:53.029246 1179266 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 00:49:53.029260 1179266 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 00:49:53.029266 1179266 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 00:49:53.029271 1179266 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 00:49:53.029276 1179266 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 00:49:53.029280 1179266 command_runner.go:130] > # ]
	I1212 00:49:53.029287 1179266 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 00:49:53.029293 1179266 command_runner.go:130] > # metrics_port = 9090
	I1212 00:49:53.029300 1179266 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 00:49:53.029307 1179266 command_runner.go:130] > # metrics_socket = ""
	I1212 00:49:53.029320 1179266 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 00:49:53.029332 1179266 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 00:49:53.029340 1179266 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 00:49:53.029349 1179266 command_runner.go:130] > # certificate on any modification event.
	I1212 00:49:53.029353 1179266 command_runner.go:130] > # metrics_cert = ""
	I1212 00:49:53.029362 1179266 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 00:49:53.029368 1179266 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 00:49:53.029374 1179266 command_runner.go:130] > # metrics_key = ""
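If metrics were switched on (enable_metrics is off by default above), the collectors listed here would be scraped from the metrics_port. A quick sketch, assuming the conventional /metrics path and that port 9090 is reachable from inside the node:

	# sketch: peek at CRI-O operation counters when metrics are enabled
	curl -s http://127.0.0.1:9090/metrics | grep -E '^crio_operations' | head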
	I1212 00:49:53.029388 1179266 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 00:49:53.029396 1179266 command_runner.go:130] > [crio.tracing]
	I1212 00:49:53.029403 1179266 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 00:49:53.029410 1179266 command_runner.go:130] > # enable_tracing = false
	I1212 00:49:53.029417 1179266 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1212 00:49:53.029426 1179266 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 00:49:53.029432 1179266 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 00:49:53.029443 1179266 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 00:49:53.029451 1179266 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 00:49:53.029456 1179266 command_runner.go:130] > [crio.stats]
	I1212 00:49:53.029473 1179266 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 00:49:53.029480 1179266 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 00:49:53.029488 1179266 command_runner.go:130] > # stats_collection_period = 0
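The dump above is the node's CRI-O configuration; to look at it again later, the same ssh pattern used elsewhere in this report works. The /etc/crio/crio.conf path is the conventional location and an assumption here, not something this log confirms.

	# sketch: re-read the CRI-O config from the multinode-270339 node
	out/minikube-linux-arm64 -p multinode-270339 ssh "sudo cat /etc/crio/crio.conf"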
	I1212 00:49:53.031332 1179266 command_runner.go:130] ! time="2023-12-12 00:49:53.021056446Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1212 00:49:53.031358 1179266 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 00:49:53.031751 1179266 cni.go:84] Creating CNI manager for ""
	I1212 00:49:53.031770 1179266 cni.go:136] 1 nodes found, recommending kindnet
	I1212 00:49:53.031802 1179266 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 00:49:53.031832 1179266 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-270339 NodeName:multinode-270339 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:49:53.032015 1179266 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-270339"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:49:53.032101 1179266 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-270339 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-270339 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 00:49:53.032176 1179266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 00:49:53.041592 1179266 command_runner.go:130] > kubeadm
	I1212 00:49:53.041611 1179266 command_runner.go:130] > kubectl
	I1212 00:49:53.041615 1179266 command_runner.go:130] > kubelet
	I1212 00:49:53.042872 1179266 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:49:53.042951 1179266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:49:53.053548 1179266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1212 00:49:53.074773 1179266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:49:53.095953 1179266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1212 00:49:53.117217 1179266 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:49:53.121721 1179266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
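A one-off check that the host entry written above actually landed could use the same ssh invocation style as the rest of this report (sketch only, not part of the test):

	# sketch: confirm the control-plane host entry inside the node
	out/minikube-linux-arm64 -p multinode-270339 ssh "grep control-plane.minikube.internal /etc/hosts"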
	I1212 00:49:53.135401 1179266 certs.go:56] Setting up /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339 for IP: 192.168.58.2
	I1212 00:49:53.135432 1179266 certs.go:190] acquiring lock for shared ca certs: {Name:mk50788b4819ee46b65351495e43cdf246a6ddce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:49:53.135638 1179266 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key
	I1212 00:49:53.135713 1179266 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key
	I1212 00:49:53.135802 1179266 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/client.key
	I1212 00:49:53.135819 1179266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/client.crt with IP's: []
	I1212 00:49:53.587777 1179266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/client.crt ...
	I1212 00:49:53.587810 1179266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/client.crt: {Name:mkbd1dfa63f35ba95384c07ec80b9f22ce953245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:49:53.588006 1179266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/client.key ...
	I1212 00:49:53.588018 1179266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/client.key: {Name:mk9af99f852842a5224d4aceac3de0e491bb5b32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:49:53.588109 1179266 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/apiserver.key.cee25041
	I1212 00:49:53.588125 1179266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 00:49:54.067425 1179266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/apiserver.crt.cee25041 ...
	I1212 00:49:54.067457 1179266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/apiserver.crt.cee25041: {Name:mk8a2f307bc07f95afe8e241a29155546c77ac2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:49:54.067645 1179266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/apiserver.key.cee25041 ...
	I1212 00:49:54.067659 1179266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/apiserver.key.cee25041: {Name:mkcdd7359e3c42756454d23903ae9073032219b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:49:54.067733 1179266 certs.go:337] copying /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/apiserver.crt
	I1212 00:49:54.067820 1179266 certs.go:341] copying /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/apiserver.key
	I1212 00:49:54.067888 1179266 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/proxy-client.key
	I1212 00:49:54.067907 1179266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/proxy-client.crt with IP's: []
	I1212 00:49:54.413420 1179266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/proxy-client.crt ...
	I1212 00:49:54.413451 1179266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/proxy-client.crt: {Name:mk664d6271a8f037d79944ad8b66e65be4185bfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:49:54.413630 1179266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/proxy-client.key ...
	I1212 00:49:54.413644 1179266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/proxy-client.key: {Name:mka066e2935a1776f00424cd40c160627887f48c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:49:54.413731 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:49:54.413753 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:49:54.413765 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:49:54.413780 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:49:54.413791 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:49:54.413809 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:49:54.413825 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:49:54.413839 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:49:54.413891 1179266 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383.pem (1338 bytes)
	W1212 00:49:54.413931 1179266 certs.go:433] ignoring /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383_empty.pem, impossibly tiny 0 bytes
	I1212 00:49:54.413947 1179266 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:49:54.413974 1179266 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:49:54.414002 1179266 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:49:54.414032 1179266 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem (1679 bytes)
	I1212 00:49:54.414088 1179266 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem (1708 bytes)
	I1212 00:49:54.414119 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> /usr/share/ca-certificates/11173832.pem
	I1212 00:49:54.414134 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:49:54.414147 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383.pem -> /usr/share/ca-certificates/1117383.pem
	I1212 00:49:54.415384 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 00:49:54.445353 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:49:54.474134 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:49:54.504440 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:49:54.533324 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:49:54.562821 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:49:54.591531 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:49:54.619627 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:49:54.648340 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem --> /usr/share/ca-certificates/11173832.pem (1708 bytes)
	I1212 00:49:54.676679 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:49:54.704503 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383.pem --> /usr/share/ca-certificates/1117383.pem (1338 bytes)
	I1212 00:49:54.732280 1179266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:49:54.753069 1179266 ssh_runner.go:195] Run: openssl version
	I1212 00:49:54.760059 1179266 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1212 00:49:54.760158 1179266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11173832.pem && ln -fs /usr/share/ca-certificates/11173832.pem /etc/ssl/certs/11173832.pem"
	I1212 00:49:54.771656 1179266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11173832.pem
	I1212 00:49:54.776105 1179266 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 00:25 /usr/share/ca-certificates/11173832.pem
	I1212 00:49:54.776138 1179266 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:25 /usr/share/ca-certificates/11173832.pem
	I1212 00:49:54.776198 1179266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11173832.pem
	I1212 00:49:54.784203 1179266 command_runner.go:130] > 3ec20f2e
	I1212 00:49:54.784606 1179266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11173832.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:49:54.795978 1179266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:49:54.807160 1179266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:49:54.811714 1179266 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:49:54.812026 1179266 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:49:54.812103 1179266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:49:54.820566 1179266 command_runner.go:130] > b5213941
	I1212 00:49:54.820982 1179266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:49:54.832593 1179266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1117383.pem && ln -fs /usr/share/ca-certificates/1117383.pem /etc/ssl/certs/1117383.pem"
	I1212 00:49:54.844209 1179266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1117383.pem
	I1212 00:49:54.848567 1179266 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 00:25 /usr/share/ca-certificates/1117383.pem
	I1212 00:49:54.848823 1179266 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:25 /usr/share/ca-certificates/1117383.pem
	I1212 00:49:54.848884 1179266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1117383.pem
	I1212 00:49:54.857072 1179266 command_runner.go:130] > 51391683
	I1212 00:49:54.857709 1179266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1117383.pem /etc/ssl/certs/51391683.0"
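Note: the three openssl/ln sequences above all follow the same pattern: hash the certificate with `openssl x509 -hash`, then link it into /etc/ssl/certs under `<hash>.0` so the system trust store can find it. Below is a minimal stand-alone Go sketch of that pattern; it shells out to the same openssl invocation seen in the log, and the helper name installCACert is purely illustrative, not minikube's own code.

package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

// installCACert hashes a PEM certificate and symlinks it into /etc/ssl/certs,
// mirroring the "openssl x509 -hash -noout -in ..." + "test -L ... || ln -fs ..."
// steps in the log above.
func installCACert(pemPath string) error {
    // Prints the subject hash, e.g. "b5213941" for minikubeCA.pem in this run.
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    if err != nil {
        return fmt.Errorf("hashing %s: %w", pemPath, err)
    }
    hash := strings.TrimSpace(string(out))

    link := filepath.Join("/etc/ssl/certs", hash+".0")
    if _, err := os.Lstat(link); err == nil {
        return nil // symlink already present, nothing to do
    }
    return os.Symlink(pemPath, link)
}

func main() {
    if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}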
	I1212 00:49:54.869194 1179266 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 00:49:54.873737 1179266 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 00:49:54.873777 1179266 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 00:49:54.873817 1179266 kubeadm.go:404] StartCluster: {Name:multinode-270339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-270339 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:49:54.873898 1179266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:49:54.873968 1179266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:49:54.915912 1179266 cri.go:89] found id: ""
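The empty "found id" result above is what marks this as a fresh node: no kube-system containers exist yet. A rough Go sketch of that probe, using the same crictl invocation shown in the log (it assumes crictl and sudo are available on the node; this is not minikube's actual implementation):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // List all kube-system containers by ID only, as in the logged command.
    out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
        "--label", "io.kubernetes.pod.namespace=kube-system").Output()
    if err != nil {
        panic(err)
    }
    ids := strings.Fields(string(out))
    if len(ids) == 0 {
        fmt.Println("no kube-system containers found; treating this as a fresh start")
        return
    }
    fmt.Printf("found %d kube-system container(s): %v\n", len(ids), ids)
}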
	I1212 00:49:54.915982 1179266 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:49:54.926637 1179266 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1212 00:49:54.926666 1179266 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1212 00:49:54.926675 1179266 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1212 00:49:54.926750 1179266 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:49:54.937294 1179266 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:49:54.937364 1179266 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:49:54.948105 1179266 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 00:49:54.948132 1179266 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 00:49:54.948141 1179266 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 00:49:54.948153 1179266 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:49:54.948184 1179266 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:49:54.948215 1179266 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:49:55.000678 1179266 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 00:49:55.000713 1179266 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1212 00:49:55.000908 1179266 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 00:49:55.000934 1179266 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 00:49:55.045774 1179266 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:49:55.045843 1179266 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:49:55.045948 1179266 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1212 00:49:55.045981 1179266 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I1212 00:49:55.046059 1179266 kubeadm.go:322] OS: Linux
	I1212 00:49:55.046081 1179266 command_runner.go:130] > OS: Linux
	I1212 00:49:55.046151 1179266 kubeadm.go:322] CGROUPS_CPU: enabled
	I1212 00:49:55.046184 1179266 command_runner.go:130] > CGROUPS_CPU: enabled
	I1212 00:49:55.046265 1179266 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1212 00:49:55.046287 1179266 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1212 00:49:55.046369 1179266 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1212 00:49:55.046398 1179266 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1212 00:49:55.046479 1179266 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1212 00:49:55.046501 1179266 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1212 00:49:55.046574 1179266 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1212 00:49:55.046596 1179266 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1212 00:49:55.046688 1179266 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1212 00:49:55.046710 1179266 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1212 00:49:55.046780 1179266 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1212 00:49:55.046801 1179266 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1212 00:49:55.046894 1179266 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1212 00:49:55.046914 1179266 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1212 00:49:55.046995 1179266 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1212 00:49:55.047024 1179266 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1212 00:49:55.128945 1179266 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:49:55.128972 1179266 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:49:55.129061 1179266 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:49:55.129071 1179266 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:49:55.129157 1179266 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 00:49:55.129166 1179266 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 00:49:55.373655 1179266 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:49:55.373721 1179266 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:49:55.378577 1179266 out.go:204]   - Generating certificates and keys ...
	I1212 00:49:55.378669 1179266 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 00:49:55.378738 1179266 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 00:49:55.378800 1179266 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 00:49:55.378805 1179266 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 00:49:55.622799 1179266 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:49:55.622867 1179266 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:49:56.252879 1179266 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:49:56.252908 1179266 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:49:56.810139 1179266 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:49:56.810173 1179266 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1212 00:49:57.235127 1179266 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 00:49:57.235150 1179266 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1212 00:49:57.943466 1179266 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 00:49:57.943489 1179266 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1212 00:49:57.943862 1179266 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-270339] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1212 00:49:57.943875 1179266 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-270339] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1212 00:49:58.381642 1179266 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 00:49:58.381666 1179266 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1212 00:49:58.382023 1179266 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-270339] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1212 00:49:58.382036 1179266 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-270339] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1212 00:49:58.829176 1179266 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:49:58.829200 1179266 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:49:59.425397 1179266 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:49:59.425423 1179266 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:49:59.725965 1179266 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 00:49:59.725990 1179266 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1212 00:49:59.726303 1179266 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:49:59.726317 1179266 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:50:00.111727 1179266 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:50:00.111754 1179266 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:50:00.491185 1179266 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:50:00.491210 1179266 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:50:00.904736 1179266 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:50:00.904772 1179266 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:50:01.275646 1179266 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:50:01.275671 1179266 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:50:01.276312 1179266 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:50:01.276337 1179266 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:50:01.279063 1179266 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:50:01.281876 1179266 out.go:204]   - Booting up control plane ...
	I1212 00:50:01.279173 1179266 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:50:01.281985 1179266 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:50:01.281999 1179266 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:50:01.282125 1179266 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:50:01.282136 1179266 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:50:01.282641 1179266 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:50:01.282655 1179266 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:50:01.295150 1179266 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:50:01.295185 1179266 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:50:01.295934 1179266 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:50:01.295963 1179266 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:50:01.296280 1179266 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 00:50:01.296296 1179266 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 00:50:01.398388 1179266 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 00:50:01.398416 1179266 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 00:50:09.901404 1179266 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502461 seconds
	I1212 00:50:09.901431 1179266 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.502461 seconds
	I1212 00:50:09.901531 1179266 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:50:09.901536 1179266 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:50:09.917539 1179266 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:50:09.917575 1179266 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:50:10.441877 1179266 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:50:10.441903 1179266 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:50:10.442074 1179266 kubeadm.go:322] [mark-control-plane] Marking the node multinode-270339 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:50:10.442080 1179266 command_runner.go:130] > [mark-control-plane] Marking the node multinode-270339 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:50:10.954038 1179266 kubeadm.go:322] [bootstrap-token] Using token: nxnh55.shu4lnek3rgmyz2d
	I1212 00:50:10.956271 1179266 out.go:204]   - Configuring RBAC rules ...
	I1212 00:50:10.954150 1179266 command_runner.go:130] > [bootstrap-token] Using token: nxnh55.shu4lnek3rgmyz2d
	I1212 00:50:10.956384 1179266 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:50:10.956393 1179266 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:50:10.960962 1179266 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:50:10.960981 1179266 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:50:10.969911 1179266 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:50:10.969935 1179266 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:50:10.974137 1179266 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:50:10.974183 1179266 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:50:10.977983 1179266 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:50:10.978008 1179266 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:50:10.983593 1179266 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:50:10.983619 1179266 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:50:10.998625 1179266 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:50:10.998654 1179266 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:50:11.233594 1179266 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 00:50:11.233622 1179266 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 00:50:11.385772 1179266 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 00:50:11.385797 1179266 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 00:50:11.385804 1179266 kubeadm.go:322] 
	I1212 00:50:11.385861 1179266 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 00:50:11.385869 1179266 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1212 00:50:11.385873 1179266 kubeadm.go:322] 
	I1212 00:50:11.385950 1179266 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 00:50:11.385958 1179266 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1212 00:50:11.385962 1179266 kubeadm.go:322] 
	I1212 00:50:11.385987 1179266 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 00:50:11.385995 1179266 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1212 00:50:11.386050 1179266 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:50:11.386059 1179266 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:50:11.386106 1179266 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:50:11.386114 1179266 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:50:11.386118 1179266 kubeadm.go:322] 
	I1212 00:50:11.386177 1179266 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 00:50:11.386185 1179266 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1212 00:50:11.386189 1179266 kubeadm.go:322] 
	I1212 00:50:11.386234 1179266 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:50:11.386242 1179266 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:50:11.386247 1179266 kubeadm.go:322] 
	I1212 00:50:11.386296 1179266 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 00:50:11.386305 1179266 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1212 00:50:11.386375 1179266 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:50:11.386382 1179266 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:50:11.386445 1179266 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:50:11.386454 1179266 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:50:11.386463 1179266 kubeadm.go:322] 
	I1212 00:50:11.386542 1179266 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:50:11.386551 1179266 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:50:11.386622 1179266 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 00:50:11.386630 1179266 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1212 00:50:11.386634 1179266 kubeadm.go:322] 
	I1212 00:50:11.386713 1179266 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nxnh55.shu4lnek3rgmyz2d \
	I1212 00:50:11.386725 1179266 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token nxnh55.shu4lnek3rgmyz2d \
	I1212 00:50:11.386822 1179266 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:423d166c085e277a11bea519bc38c8d176eb97d5c6d6f0fd8c403765ff119d59 \
	I1212 00:50:11.386830 1179266 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:423d166c085e277a11bea519bc38c8d176eb97d5c6d6f0fd8c403765ff119d59 \
	I1212 00:50:11.386849 1179266 kubeadm.go:322] 	--control-plane 
	I1212 00:50:11.386858 1179266 command_runner.go:130] > 	--control-plane 
	I1212 00:50:11.386862 1179266 kubeadm.go:322] 
	I1212 00:50:11.386942 1179266 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:50:11.386950 1179266 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:50:11.386954 1179266 kubeadm.go:322] 
	I1212 00:50:11.387031 1179266 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nxnh55.shu4lnek3rgmyz2d \
	I1212 00:50:11.387039 1179266 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token nxnh55.shu4lnek3rgmyz2d \
	I1212 00:50:11.387135 1179266 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:423d166c085e277a11bea519bc38c8d176eb97d5c6d6f0fd8c403765ff119d59 
	I1212 00:50:11.387141 1179266 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:423d166c085e277a11bea519bc38c8d176eb97d5c6d6f0fd8c403765ff119d59 
	I1212 00:50:11.388143 1179266 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1212 00:50:11.388166 1179266 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1212 00:50:11.388267 1179266 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:50:11.388276 1179266 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:50:11.388288 1179266 cni.go:84] Creating CNI manager for ""
	I1212 00:50:11.388296 1179266 cni.go:136] 1 nodes found, recommending kindnet
	I1212 00:50:11.391684 1179266 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 00:50:11.393504 1179266 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:50:11.406028 1179266 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 00:50:11.406052 1179266 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1212 00:50:11.406060 1179266 command_runner.go:130] > Device: 3ah/58d	Inode: 1572675     Links: 1
	I1212 00:50:11.406068 1179266 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:50:11.406084 1179266 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1212 00:50:11.406091 1179266 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1212 00:50:11.406097 1179266 command_runner.go:130] > Change: 2023-12-12 00:11:51.729537575 +0000
	I1212 00:50:11.406103 1179266 command_runner.go:130] >  Birth: 2023-12-12 00:11:51.689538767 +0000
	I1212 00:50:11.409815 1179266 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 00:50:11.409831 1179266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 00:50:11.455307 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:50:12.292335 1179266 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1212 00:50:12.301214 1179266 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1212 00:50:12.315798 1179266 command_runner.go:130] > serviceaccount/kindnet created
	I1212 00:50:12.327254 1179266 command_runner.go:130] > daemonset.apps/kindnet created
	I1212 00:50:12.332877 1179266 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:50:12.333005 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:12.333083 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4 minikube.k8s.io/name=multinode-270339 minikube.k8s.io/updated_at=2023_12_12T00_50_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:12.539144 1179266 command_runner.go:130] > node/multinode-270339 labeled
	I1212 00:50:12.542925 1179266 command_runner.go:130] > -16
	I1212 00:50:12.542959 1179266 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1212 00:50:12.542981 1179266 ops.go:34] apiserver oom_adj: -16
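For context, the oom_adj probe above is just a procfs read: find the kube-apiserver PID, then read /proc/<pid>/oom_adj. A small illustrative Go equivalent (it assumes pgrep is installed and is not taken from minikube's ops.go):

package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

func main() {
    // pgrep prints the PID(s) of processes whose name matches kube-apiserver.
    out, err := exec.Command("pgrep", "kube-apiserver").Output()
    if err != nil {
        fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
        os.Exit(1)
    }
    pid := strings.Fields(string(out))[0]

    adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj))) // e.g. -16 in this run
}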
	I1212 00:50:12.543052 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:12.652767 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:12.652857 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:12.739230 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:13.239524 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:13.324915 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:13.739943 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:13.829419 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:14.239539 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:14.324548 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:14.739566 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:14.828603 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:15.240331 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:15.344776 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:15.740404 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:15.828045 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:16.239444 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:16.330785 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:16.740143 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:16.826936 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:17.239604 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:17.330235 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:17.739905 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:17.834321 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:18.239970 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:18.329522 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:18.739786 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:18.830960 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:19.240282 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:19.327961 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:19.739556 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:19.832484 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:20.239640 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:20.324378 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:20.739764 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:20.832514 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:21.239838 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:21.330835 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:21.739386 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:21.843364 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:22.239990 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:22.341598 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:22.740194 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:22.836908 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:23.240432 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:23.339411 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:23.739992 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:23.827071 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:24.239887 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:24.335753 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:24.740298 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:24.859756 1179266 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 00:50:25.240341 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:25.351350 1179266 command_runner.go:130] > NAME      SECRETS   AGE
	I1212 00:50:25.351371 1179266 command_runner.go:130] > default   0         0s
	I1212 00:50:25.351386 1179266 kubeadm.go:1088] duration metric: took 13.01843334s to wait for elevateKubeSystemPrivileges.
	I1212 00:50:25.351398 1179266 kubeadm.go:406] StartCluster complete in 30.477586491s
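The long run of "serviceaccounts \"default\" not found" lines above is a poll: the tool keeps asking the API server for the default ServiceAccount until kube-controller-manager has created it, then records the elapsed time. A hedged client-go sketch of that wait loop follows; the kubeconfig path, cadence, and function name are illustrative only, not minikube's code.

package main

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// waitForDefaultSA polls until the "default" ServiceAccount in the "default"
// namespace exists, mirroring the repeated "kubectl get sa default" calls above.
func waitForDefaultSA(ctx context.Context, client kubernetes.Interface, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for {
        _, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
        if err == nil {
            return nil // service account exists; safe to proceed
        }
        if time.Now().After(deadline) {
            return fmt.Errorf("timed out waiting for default service account: %w", err)
        }
        time.Sleep(500 * time.Millisecond) // roughly the ~0.5s cadence visible in the log
    }
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    if err := waitForDefaultSA(context.Background(), client, time.Minute); err != nil {
        panic(err)
    }
    fmt.Println("default service account is ready")
}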
	I1212 00:50:25.351414 1179266 settings.go:142] acquiring lock: {Name:mk4639df610f4394c6679c82a1803a108086063e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:50:25.351478 1179266 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:50:25.352136 1179266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-1111943/kubeconfig: {Name:mk6bda1f8356012618f11e41d531a3f786e443d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:50:25.352615 1179266 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:50:25.352883 1179266 kapi.go:59] client config for multinode-270339: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/client.key", CAFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7710), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:50:25.353338 1179266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:50:25.353626 1179266 config.go:182] Loaded profile config "multinode-270339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:50:25.353782 1179266 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 00:50:25.353868 1179266 addons.go:69] Setting storage-provisioner=true in profile "multinode-270339"
	I1212 00:50:25.353884 1179266 addons.go:231] Setting addon storage-provisioner=true in "multinode-270339"
	I1212 00:50:25.353937 1179266 host.go:66] Checking if "multinode-270339" exists ...
	I1212 00:50:25.354409 1179266 cli_runner.go:164] Run: docker container inspect multinode-270339 --format={{.State.Status}}
	I1212 00:50:25.354841 1179266 addons.go:69] Setting default-storageclass=true in profile "multinode-270339"
	I1212 00:50:25.354861 1179266 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-270339"
	I1212 00:50:25.355120 1179266 cli_runner.go:164] Run: docker container inspect multinode-270339 --format={{.State.Status}}
	I1212 00:50:25.355823 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 00:50:25.355838 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:25.355847 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:25.355857 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:25.356057 1179266 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 00:50:25.394966 1179266 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:50:25.397871 1179266 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:50:25.397897 1179266 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:50:25.397962 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339
	I1212 00:50:25.399759 1179266 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:50:25.400018 1179266 kapi.go:59] client config for multinode-270339: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/client.key", CAFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7710), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:50:25.400342 1179266 addons.go:231] Setting addon default-storageclass=true in "multinode-270339"
	I1212 00:50:25.400379 1179266 host.go:66] Checking if "multinode-270339" exists ...
	I1212 00:50:25.400814 1179266 cli_runner.go:164] Run: docker container inspect multinode-270339 --format={{.State.Status}}
	I1212 00:50:25.413525 1179266 round_trippers.go:574] Response Status: 200 OK in 57 milliseconds
	I1212 00:50:25.413547 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:25.413556 1179266 round_trippers.go:580]     Audit-Id: 09614909-c44a-4b08-a197-1b68a985ea3f
	I1212 00:50:25.413562 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:25.413569 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:25.413575 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:25.413581 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:25.413587 1179266 round_trippers.go:580]     Content-Length: 291
	I1212 00:50:25.413593 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:25 GMT
	I1212 00:50:25.413619 1179266 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"069b2802-3295-4313-81b9-da639a5d7429","resourceVersion":"313","creationTimestamp":"2023-12-12T00:50:11Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 00:50:25.413997 1179266 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"069b2802-3295-4313-81b9-da639a5d7429","resourceVersion":"313","creationTimestamp":"2023-12-12T00:50:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 00:50:25.414044 1179266 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 00:50:25.414050 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:25.414058 1179266 round_trippers.go:473]     Content-Type: application/json
	I1212 00:50:25.414064 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:25.414071 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:25.437657 1179266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34085 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339/id_rsa Username:docker}
	I1212 00:50:25.450222 1179266 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:50:25.450248 1179266 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:50:25.450309 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339
	I1212 00:50:25.476513 1179266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34085 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339/id_rsa Username:docker}
	I1212 00:50:25.518954 1179266 round_trippers.go:574] Response Status: 200 OK in 104 milliseconds
	I1212 00:50:25.518975 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:25.518984 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:25.518990 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:25.519001 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:25.519011 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:25.519018 1179266 round_trippers.go:580]     Content-Length: 291
	I1212 00:50:25.519026 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:25 GMT
	I1212 00:50:25.519033 1179266 round_trippers.go:580]     Audit-Id: cefdb728-ba2b-414d-b444-62ff02c4e198
	I1212 00:50:25.523731 1179266 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"069b2802-3295-4313-81b9-da639a5d7429","resourceVersion":"328","creationTimestamp":"2023-12-12T00:50:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 00:50:25.523899 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 00:50:25.523914 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:25.523923 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:25.523930 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:25.564359 1179266 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
	I1212 00:50:25.564382 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:25.564391 1179266 round_trippers.go:580]     Content-Length: 291
	I1212 00:50:25.564397 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:25 GMT
	I1212 00:50:25.564404 1179266 round_trippers.go:580]     Audit-Id: 5269a774-4d9d-4576-bab1-913eb9afab02
	I1212 00:50:25.564410 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:25.564417 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:25.564423 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:25.564436 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:25.566444 1179266 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"069b2802-3295-4313-81b9-da639a5d7429","resourceVersion":"328","creationTimestamp":"2023-12-12T00:50:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 00:50:25.566554 1179266 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-270339" context rescaled to 1 replicas
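The GET/PUT pair on .../deployments/coredns/scale above is the Scale-subresource round trip that drops CoreDNS to a single replica. A minimal client-go sketch of the same operation is below, assuming a kubeconfig for the cluster is at hand; it mirrors the logged requests rather than reproducing minikube's kapi code.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    ctx := context.Background()

    // Read the current scale of the coredns deployment in kube-system (the GET above).
    scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }

    // Drop it to one replica and write it back (the PUT above).
    scale.Spec.Replicas = 1
    if _, err := client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
    fmt.Println("coredns rescaled to 1 replica")
}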
	I1212 00:50:25.566585 1179266 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:50:25.569278 1179266 out.go:177] * Verifying Kubernetes components...
	I1212 00:50:25.572207 1179266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:50:25.598047 1179266 command_runner.go:130] > apiVersion: v1
	I1212 00:50:25.598065 1179266 command_runner.go:130] > data:
	I1212 00:50:25.598071 1179266 command_runner.go:130] >   Corefile: |
	I1212 00:50:25.598076 1179266 command_runner.go:130] >     .:53 {
	I1212 00:50:25.598081 1179266 command_runner.go:130] >         errors
	I1212 00:50:25.598086 1179266 command_runner.go:130] >         health {
	I1212 00:50:25.598092 1179266 command_runner.go:130] >            lameduck 5s
	I1212 00:50:25.598097 1179266 command_runner.go:130] >         }
	I1212 00:50:25.598102 1179266 command_runner.go:130] >         ready
	I1212 00:50:25.598109 1179266 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 00:50:25.598116 1179266 command_runner.go:130] >            pods insecure
	I1212 00:50:25.598123 1179266 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 00:50:25.598130 1179266 command_runner.go:130] >            ttl 30
	I1212 00:50:25.598135 1179266 command_runner.go:130] >         }
	I1212 00:50:25.598145 1179266 command_runner.go:130] >         prometheus :9153
	I1212 00:50:25.598152 1179266 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 00:50:25.598169 1179266 command_runner.go:130] >            max_concurrent 1000
	I1212 00:50:25.598177 1179266 command_runner.go:130] >         }
	I1212 00:50:25.598182 1179266 command_runner.go:130] >         cache 30
	I1212 00:50:25.598190 1179266 command_runner.go:130] >         loop
	I1212 00:50:25.598201 1179266 command_runner.go:130] >         reload
	I1212 00:50:25.598208 1179266 command_runner.go:130] >         loadbalance
	I1212 00:50:25.598213 1179266 command_runner.go:130] >     }
	I1212 00:50:25.598222 1179266 command_runner.go:130] > kind: ConfigMap
	I1212 00:50:25.598227 1179266 command_runner.go:130] > metadata:
	I1212 00:50:25.598237 1179266 command_runner.go:130] >   creationTimestamp: "2023-12-12T00:50:11Z"
	I1212 00:50:25.598245 1179266 command_runner.go:130] >   name: coredns
	I1212 00:50:25.598250 1179266 command_runner.go:130] >   namespace: kube-system
	I1212 00:50:25.598259 1179266 command_runner.go:130] >   resourceVersion: "217"
	I1212 00:50:25.598265 1179266 command_runner.go:130] >   uid: d419f839-5a47-4d8f-92e8-7294bed35f64
	I1212 00:50:25.601801 1179266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:50:25.602217 1179266 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:50:25.602467 1179266 kapi.go:59] client config for multinode-270339: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/client.key", CAFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7710), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:50:25.602712 1179266 node_ready.go:35] waiting up to 6m0s for node "multinode-270339" to be "Ready" ...
	I1212 00:50:25.602802 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:25.602812 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:25.602821 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:25.602833 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:25.635929 1179266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:50:25.655320 1179266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:50:25.691757 1179266 round_trippers.go:574] Response Status: 200 OK in 88 milliseconds
	I1212 00:50:25.691782 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:25.691792 1179266 round_trippers.go:580]     Audit-Id: 0bbdeea3-e0df-4dd9-b36b-c92ae8f1958f
	I1212 00:50:25.691798 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:25.691811 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:25.691822 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:25.691832 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:25.691841 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:25 GMT
	I1212 00:50:25.712250 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"319","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1212 00:50:25.712949 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:25.712967 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:25.712977 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:25.712985 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:25.774441 1179266 round_trippers.go:574] Response Status: 200 OK in 61 milliseconds
	I1212 00:50:25.774469 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:25.774479 1179266 round_trippers.go:580]     Audit-Id: 26d7d9d5-5027-459e-8cda-5ebd38019695
	I1212 00:50:25.774485 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:25.774492 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:25.774499 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:25.774506 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:25.774514 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:25 GMT
	I1212 00:50:25.777708 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"319","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1212 00:50:26.209728 1179266 command_runner.go:130] > configmap/coredns replaced
	I1212 00:50:26.216069 1179266 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1212 00:50:26.278338 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:26.278365 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:26.278375 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:26.278382 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:26.281073 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:26.281094 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:26.281103 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:26.281109 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:26.281116 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:26.281131 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:26.281147 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:26 GMT
	I1212 00:50:26.281154 1179266 round_trippers.go:580]     Audit-Id: 5c529077-d9bc-45fa-8c0c-a23c9be53f66
	I1212 00:50:26.281393 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"319","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1212 00:50:26.311820 1179266 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1212 00:50:26.318118 1179266 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1212 00:50:26.327727 1179266 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 00:50:26.337454 1179266 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 00:50:26.346051 1179266 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1212 00:50:26.356373 1179266 command_runner.go:130] > pod/storage-provisioner created
	I1212 00:50:26.363252 1179266 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1212 00:50:26.363383 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 00:50:26.363392 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:26.363402 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:26.363410 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:26.385618 1179266 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I1212 00:50:26.385643 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:26.385652 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:26 GMT
	I1212 00:50:26.385658 1179266 round_trippers.go:580]     Audit-Id: 8dddedc9-efd4-466b-99b9-c4dc049fb099
	I1212 00:50:26.385664 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:26.385670 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:26.385676 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:26.385683 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:26.385690 1179266 round_trippers.go:580]     Content-Length: 1273
	I1212 00:50:26.385943 1179266 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"372"},"items":[{"metadata":{"name":"standard","uid":"f5236c4c-8067-49eb-a284-19c262ddf6b6","resourceVersion":"362","creationTimestamp":"2023-12-12T00:50:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T00:50:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 00:50:26.386463 1179266 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f5236c4c-8067-49eb-a284-19c262ddf6b6","resourceVersion":"362","creationTimestamp":"2023-12-12T00:50:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T00:50:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 00:50:26.386516 1179266 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 00:50:26.386531 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:26.386542 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:26.386558 1179266 round_trippers.go:473]     Content-Type: application/json
	I1212 00:50:26.386565 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:26.396358 1179266 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 00:50:26.396395 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:26.396405 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:26.396412 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:26.396418 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:26.396424 1179266 round_trippers.go:580]     Content-Length: 1220
	I1212 00:50:26.396431 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:26 GMT
	I1212 00:50:26.396442 1179266 round_trippers.go:580]     Audit-Id: 6839e61b-c678-45ba-9abf-3dfda39de8a3
	I1212 00:50:26.396448 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:26.396604 1179266 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f5236c4c-8067-49eb-a284-19c262ddf6b6","resourceVersion":"362","creationTimestamp":"2023-12-12T00:50:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T00:50:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 00:50:26.400704 1179266 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 00:50:26.402689 1179266 addons.go:502] enable addons completed in 1.048924251s: enabled=[storage-provisioner default-storageclass]
	I1212 00:50:26.778370 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:26.778393 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:26.778404 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:26.778411 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:26.780890 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:26.780913 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:26.780922 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:26 GMT
	I1212 00:50:26.780928 1179266 round_trippers.go:580]     Audit-Id: 07f882a9-e908-454e-bffe-9b457f79afb6
	I1212 00:50:26.780935 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:26.780942 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:26.780948 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:26.780959 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:26.781323 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"319","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1212 00:50:27.279014 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:27.279037 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:27.279048 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:27.279055 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:27.281568 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:27.281630 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:27.281650 1179266 round_trippers.go:580]     Audit-Id: 9afb79ed-e44d-4df5-92c5-12decd66b34b
	I1212 00:50:27.281668 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:27.281701 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:27.281721 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:27.281738 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:27.281750 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:27 GMT
	I1212 00:50:27.281864 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"319","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1212 00:50:27.778391 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:27.778465 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:27.778488 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:27.778507 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:27.782176 1179266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:50:27.782201 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:27.782210 1179266 round_trippers.go:580]     Audit-Id: 5caddc7e-5c25-4834-90cc-6f5d44f01090
	I1212 00:50:27.782222 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:27.782228 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:27.782235 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:27.782241 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:27.782247 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:27 GMT
	I1212 00:50:27.782380 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"319","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1212 00:50:27.782771 1179266 node_ready.go:58] node "multinode-270339" has status "Ready":"False"
	I1212 00:50:28.278363 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:28.278385 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:28.278395 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:28.278403 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:28.281012 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:28.281070 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:28.281115 1179266 round_trippers.go:580]     Audit-Id: a72d1ba7-6110-4618-a64d-91d1489c97ca
	I1212 00:50:28.281145 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:28.281164 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:28.281184 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:28.281196 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:28.281203 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:28 GMT
	I1212 00:50:28.281308 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"319","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1212 00:50:28.778285 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:28.778311 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:28.778321 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:28.778329 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:28.780928 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:28.780968 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:28.780976 1179266 round_trippers.go:580]     Audit-Id: b2883a27-03a5-4c94-a456-5529c888fda7
	I1212 00:50:28.780983 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:28.780990 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:28.780996 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:28.781002 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:28.781009 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:28 GMT
	I1212 00:50:28.781138 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:50:28.781552 1179266 node_ready.go:49] node "multinode-270339" has status "Ready":"True"
	I1212 00:50:28.781572 1179266 node_ready.go:38] duration metric: took 3.178832689s waiting for node "multinode-270339" to be "Ready" ...
	I1212 00:50:28.781582 1179266 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:50:28.781648 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1212 00:50:28.781659 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:28.781667 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:28.781674 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:28.785259 1179266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:50:28.785287 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:28.785300 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:28 GMT
	I1212 00:50:28.785307 1179266 round_trippers.go:580]     Audit-Id: 18b2728e-1c86-43f4-9a4d-48f45cd8195e
	I1212 00:50:28.785314 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:28.785320 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:28.785327 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:28.785334 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:28.785926 1179266 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"395"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7n4rj","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"16efc97c-281e-4ae4-89a2-7c7507db2e8f","resourceVersion":"394","creationTimestamp":"2023-12-12T00:50:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"80aaeecd-d2e6-429b-972f-733cb0b597ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80aaeecd-d2e6-429b-972f-733cb0b597ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56434 chars]
	I1212 00:50:28.789856 1179266 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7n4rj" in "kube-system" namespace to be "Ready" ...
	I1212 00:50:28.789948 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7n4rj
	I1212 00:50:28.789958 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:28.789966 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:28.789979 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:28.792641 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:28.792660 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:28.792670 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:28.792676 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:28 GMT
	I1212 00:50:28.792682 1179266 round_trippers.go:580]     Audit-Id: b996afaf-cca7-4c34-bf3d-0bde2ff85445
	I1212 00:50:28.792688 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:28.792694 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:28.792700 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:28.792823 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7n4rj","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"16efc97c-281e-4ae4-89a2-7c7507db2e8f","resourceVersion":"394","creationTimestamp":"2023-12-12T00:50:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"80aaeecd-d2e6-429b-972f-733cb0b597ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80aaeecd-d2e6-429b-972f-733cb0b597ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1212 00:50:28.793347 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:28.793367 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:28.793376 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:28.793383 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:28.795757 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:28.795780 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:28.795788 1179266 round_trippers.go:580]     Audit-Id: c8f74d95-d9bf-420b-ab14-0ea375bbef40
	I1212 00:50:28.795795 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:28.795801 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:28.795807 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:28.795817 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:28.795828 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:28 GMT
	I1212 00:50:28.795996 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:50:28.796417 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7n4rj
	I1212 00:50:28.796435 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:28.796444 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:28.796451 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:28.798816 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:28.798837 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:28.798847 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:28 GMT
	I1212 00:50:28.798857 1179266 round_trippers.go:580]     Audit-Id: 65426649-a271-4f8e-9394-03b4c21ce8ea
	I1212 00:50:28.798864 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:28.798883 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:28.798894 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:28.798903 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:28.799307 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7n4rj","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"16efc97c-281e-4ae4-89a2-7c7507db2e8f","resourceVersion":"394","creationTimestamp":"2023-12-12T00:50:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"80aaeecd-d2e6-429b-972f-733cb0b597ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80aaeecd-d2e6-429b-972f-733cb0b597ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1212 00:50:28.799842 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:28.799863 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:28.799873 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:28.799883 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:28.802205 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:28.802222 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:28.802230 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:28.802236 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:28.802242 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:28.802258 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:28.802266 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:28 GMT
	I1212 00:50:28.802277 1179266 round_trippers.go:580]     Audit-Id: d2c7a2a5-e6f1-43eb-b7c3-c4e6f70142ca
	I1212 00:50:28.802438 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:50:29.303565 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7n4rj
	I1212 00:50:29.303587 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:29.303598 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:29.303610 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:29.306147 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:29.306186 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:29.306195 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:29.306201 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:29.306208 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:29.306215 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:29 GMT
	I1212 00:50:29.306224 1179266 round_trippers.go:580]     Audit-Id: baa721fd-55e4-4990-a9f7-256adcd57ce7
	I1212 00:50:29.306231 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:29.306552 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7n4rj","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"16efc97c-281e-4ae4-89a2-7c7507db2e8f","resourceVersion":"394","creationTimestamp":"2023-12-12T00:50:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"80aaeecd-d2e6-429b-972f-733cb0b597ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80aaeecd-d2e6-429b-972f-733cb0b597ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1212 00:50:29.307053 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:29.307071 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:29.307080 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:29.307087 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:29.309268 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:29.309290 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:29.309297 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:29.309304 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:29.309311 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:29.309317 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:29.309327 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:29 GMT
	I1212 00:50:29.309339 1179266 round_trippers.go:580]     Audit-Id: 2d7f7488-4e89-4461-a0e9-f1bf2db42ab9
	I1212 00:50:29.309465 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:50:29.803550 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7n4rj
	I1212 00:50:29.803574 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:29.803584 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:29.803592 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:29.806189 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:29.806218 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:29.806227 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:29.806234 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:29.806241 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:29 GMT
	I1212 00:50:29.806247 1179266 round_trippers.go:580]     Audit-Id: d843e897-bcc7-45b7-8435-77c15a411061
	I1212 00:50:29.806274 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:29.806286 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:29.806426 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7n4rj","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"16efc97c-281e-4ae4-89a2-7c7507db2e8f","resourceVersion":"406","creationTimestamp":"2023-12-12T00:50:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"80aaeecd-d2e6-429b-972f-733cb0b597ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80aaeecd-d2e6-429b-972f-733cb0b597ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1212 00:50:29.806964 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:29.806983 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:29.806992 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:29.807000 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:29.809304 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:29.809354 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:29.809375 1179266 round_trippers.go:580]     Audit-Id: f72c32e6-c209-4047-8b8c-53daacbada26
	I1212 00:50:29.809397 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:29.809426 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:29.809450 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:29.809476 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:29.809494 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:29 GMT
	I1212 00:50:29.809692 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:50:29.810103 1179266 pod_ready.go:92] pod "coredns-5dd5756b68-7n4rj" in "kube-system" namespace has status "Ready":"True"
	I1212 00:50:29.810124 1179266 pod_ready.go:81] duration metric: took 1.020237078s waiting for pod "coredns-5dd5756b68-7n4rj" in "kube-system" namespace to be "Ready" ...
	I1212 00:50:29.810135 1179266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-270339" in "kube-system" namespace to be "Ready" ...
	I1212 00:50:29.810206 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-270339
	I1212 00:50:29.810217 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:29.810225 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:29.810232 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:29.812358 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:29.812379 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:29.812386 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:29.812393 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:29.812399 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:29.812405 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:29.812411 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:29 GMT
	I1212 00:50:29.812418 1179266 round_trippers.go:580]     Audit-Id: c5d25ebc-49f7-4e00-8a53-44eb3f390fef
	I1212 00:50:29.812626 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-270339","namespace":"kube-system","uid":"67c26bc8-7478-40c7-b698-9d505f2d9108","resourceVersion":"295","creationTimestamp":"2023-12-12T00:50:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"fa312a672de2e7002f7da391295c4da1","kubernetes.io/config.mirror":"fa312a672de2e7002f7da391295c4da1","kubernetes.io/config.seen":"2023-12-12T00:50:11.291444657Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I1212 00:50:29.813137 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:29.813153 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:29.813161 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:29.813169 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:29.815226 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:29.815246 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:29.815254 1179266 round_trippers.go:580]     Audit-Id: 3fd12b9d-a29f-439e-a6d6-55f2fb368d7d
	I1212 00:50:29.815260 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:29.815266 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:29.815272 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:29.815278 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:29.815286 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:29 GMT
	I1212 00:50:29.815451 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:50:29.815890 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-270339
	I1212 00:50:29.815905 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:29.815914 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:29.815921 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:29.818048 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:29.818069 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:29.818077 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:29.818083 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:29.818090 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:29.818097 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:29 GMT
	I1212 00:50:29.818103 1179266 round_trippers.go:580]     Audit-Id: 9857e6a9-ef4d-421c-b4ad-a677d9deba1b
	I1212 00:50:29.818112 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:29.818322 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-270339","namespace":"kube-system","uid":"67c26bc8-7478-40c7-b698-9d505f2d9108","resourceVersion":"295","creationTimestamp":"2023-12-12T00:50:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"fa312a672de2e7002f7da391295c4da1","kubernetes.io/config.mirror":"fa312a672de2e7002f7da391295c4da1","kubernetes.io/config.seen":"2023-12-12T00:50:11.291444657Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I1212 00:50:29.818814 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:29.818830 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:29.818839 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:29.818847 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:29.820880 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:29.820900 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:29.820911 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:29.820918 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:29.820924 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:29.820930 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:29.820940 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:29 GMT
	I1212 00:50:29.820949 1179266 round_trippers.go:580]     Audit-Id: 8ba3c032-c9a1-4563-ad2e-76c73f4544e5
	I1212 00:50:29.821143 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:50:30.322315 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-270339
	I1212 00:50:30.322340 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:30.322350 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:30.322357 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:30.325052 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:30.325077 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:30.325085 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:30 GMT
	I1212 00:50:30.325092 1179266 round_trippers.go:580]     Audit-Id: be62601d-559a-460e-876c-aa2eabc0a7da
	I1212 00:50:30.325098 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:30.325105 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:30.325111 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:30.325128 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:30.325288 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-270339","namespace":"kube-system","uid":"67c26bc8-7478-40c7-b698-9d505f2d9108","resourceVersion":"295","creationTimestamp":"2023-12-12T00:50:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"fa312a672de2e7002f7da391295c4da1","kubernetes.io/config.mirror":"fa312a672de2e7002f7da391295c4da1","kubernetes.io/config.seen":"2023-12-12T00:50:11.291444657Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I1212 00:50:30.325794 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:30.325810 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:30.325821 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:30.325828 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:30.328187 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:30.328244 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:30.328264 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:30.328286 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:30.328317 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:30 GMT
	I1212 00:50:30.328341 1179266 round_trippers.go:580]     Audit-Id: 596c7327-404d-4bab-a58a-dcdd4889ba24
	I1212 00:50:30.328361 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:30.328393 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:30.328543 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:50:30.821801 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-270339
	I1212 00:50:30.821826 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:30.821835 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:30.821843 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:30.824346 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:30.824368 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:30.824377 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:30.824383 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:30 GMT
	I1212 00:50:30.824390 1179266 round_trippers.go:580]     Audit-Id: 0e4a30bd-018a-4b88-87a2-a312c6c3a4ca
	I1212 00:50:30.824396 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:30.824403 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:30.824409 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:30.824722 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-270339","namespace":"kube-system","uid":"67c26bc8-7478-40c7-b698-9d505f2d9108","resourceVersion":"295","creationTimestamp":"2023-12-12T00:50:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"fa312a672de2e7002f7da391295c4da1","kubernetes.io/config.mirror":"fa312a672de2e7002f7da391295c4da1","kubernetes.io/config.seen":"2023-12-12T00:50:11.291444657Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I1212 00:50:30.825206 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:30.825231 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:30.825240 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:30.825260 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:30.827477 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:30.827495 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:30.827504 1179266 round_trippers.go:580]     Audit-Id: b5837a16-5f93-4b77-8493-3412ef6c946f
	I1212 00:50:30.827510 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:30.827516 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:30.827522 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:30.827529 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:30.827538 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:30 GMT
	I1212 00:50:30.827741 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:50:31.321819 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-270339
	I1212 00:50:31.321842 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:31.321853 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:31.321860 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:31.324579 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:31.324656 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:31.324685 1179266 round_trippers.go:580]     Audit-Id: 9a4f7938-8e02-4ce1-81c2-ecdf188062e0
	I1212 00:50:31.324705 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:31.324734 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:31.324759 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:31.324779 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:31.324798 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:31 GMT
	I1212 00:50:31.324918 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-270339","namespace":"kube-system","uid":"67c26bc8-7478-40c7-b698-9d505f2d9108","resourceVersion":"295","creationTimestamp":"2023-12-12T00:50:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"fa312a672de2e7002f7da391295c4da1","kubernetes.io/config.mirror":"fa312a672de2e7002f7da391295c4da1","kubernetes.io/config.seen":"2023-12-12T00:50:11.291444657Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I1212 00:50:31.325438 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:31.325476 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:31.325495 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:31.325504 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:31.327754 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:31.327774 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:31.327787 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:31.327793 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:31.327799 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:31 GMT
	I1212 00:50:31.327810 1179266 round_trippers.go:580]     Audit-Id: 508c10a9-4ac4-40fe-831e-7ac2f28198c3
	I1212 00:50:31.327816 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:31.327823 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:31.328207 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:50:31.822336 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-270339
	I1212 00:50:31.822362 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:31.822372 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:31.822379 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:31.824717 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:31.824775 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:31.824790 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:31.824798 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:31.824804 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:31 GMT
	I1212 00:50:31.824815 1179266 round_trippers.go:580]     Audit-Id: 0e2f5399-1a0b-4867-ac0f-347858421bb3
	I1212 00:50:31.824821 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:31.824831 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:31.825194 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-270339","namespace":"kube-system","uid":"67c26bc8-7478-40c7-b698-9d505f2d9108","resourceVersion":"416","creationTimestamp":"2023-12-12T00:50:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"fa312a672de2e7002f7da391295c4da1","kubernetes.io/config.mirror":"fa312a672de2e7002f7da391295c4da1","kubernetes.io/config.seen":"2023-12-12T00:50:11.291444657Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1212 00:50:31.825707 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:31.825723 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:31.825731 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:31.825746 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:31.827852 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:31.827911 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:31.827932 1179266 round_trippers.go:580]     Audit-Id: dc7b60b0-722b-42ff-a3a2-b383397c79ab
	I1212 00:50:31.827950 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:31.827981 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:31.828002 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:31.828020 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:31.828040 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:31 GMT
	I1212 00:50:31.828185 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:50:31.828585 1179266 pod_ready.go:92] pod "etcd-multinode-270339" in "kube-system" namespace has status "Ready":"True"
	I1212 00:50:31.828605 1179266 pod_ready.go:81] duration metric: took 2.01846207s waiting for pod "etcd-multinode-270339" in "kube-system" namespace to be "Ready" ...
	I1212 00:50:31.828618 1179266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-270339" in "kube-system" namespace to be "Ready" ...
	I1212 00:50:31.828670 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-270339
	I1212 00:50:31.828681 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:31.828690 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:31.828697 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:31.830803 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:31.830819 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:31.830827 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:31.830833 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:31.830839 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:31.830846 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:31 GMT
	I1212 00:50:31.830852 1179266 round_trippers.go:580]     Audit-Id: 6274f819-08cf-474e-b75f-a68c8746ce21
	I1212 00:50:31.830858 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:31.831074 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-270339","namespace":"kube-system","uid":"69200dae-a0c0-4c7a-8615-5b242ea522fa","resourceVersion":"413","creationTimestamp":"2023-12-12T00:50:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"85f57a6e9d4169ea7d73b4da63f983eb","kubernetes.io/config.mirror":"85f57a6e9d4169ea7d73b4da63f983eb","kubernetes.io/config.seen":"2023-12-12T00:50:02.864494058Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1212 00:50:31.831582 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:31.831602 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:31.831610 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:31.831617 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:31.833469 1179266 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:50:31.833489 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:31.833497 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:31 GMT
	I1212 00:50:31.833503 1179266 round_trippers.go:580]     Audit-Id: 64a2c246-587e-4f52-9f1b-eb35ab946d84
	I1212 00:50:31.833510 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:31.833516 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:31.833525 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:31.833534 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:31.833785 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:50:31.834149 1179266 pod_ready.go:92] pod "kube-apiserver-multinode-270339" in "kube-system" namespace has status "Ready":"True"
	I1212 00:50:31.834170 1179266 pod_ready.go:81] duration metric: took 5.544307ms waiting for pod "kube-apiserver-multinode-270339" in "kube-system" namespace to be "Ready" ...
	I1212 00:50:31.834186 1179266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-270339" in "kube-system" namespace to be "Ready" ...
	I1212 00:50:31.834243 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-270339
	I1212 00:50:31.834253 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:31.834261 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:31.834267 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:31.836208 1179266 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:50:31.836224 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:31.836231 1179266 round_trippers.go:580]     Audit-Id: 22c4d2e4-f5a5-44e4-a6d0-5eaba7388794
	I1212 00:50:31.836238 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:31.836244 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:31.836250 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:31.836257 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:31.836263 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:31 GMT
	I1212 00:50:31.836433 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-270339","namespace":"kube-system","uid":"71e4b30c-7581-45a1-a740-05dd4689cd8d","resourceVersion":"414","creationTimestamp":"2023-12-12T00:50:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"69899ce532b3e555ace218f70ebfd6a0","kubernetes.io/config.mirror":"69899ce532b3e555ace218f70ebfd6a0","kubernetes.io/config.seen":"2023-12-12T00:50:11.291446897Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1212 00:50:31.979177 1179266 request.go:629] Waited for 142.270957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:31.979254 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:31.979264 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:31.979273 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:31.979281 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:31.981681 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:31.981746 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:31.981762 1179266 round_trippers.go:580]     Audit-Id: 37058ddb-1f77-4969-81df-bde548fd8aae
	I1212 00:50:31.981770 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:31.981780 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:31.981787 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:31.981793 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:31.981802 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:31 GMT
	I1212 00:50:31.981960 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:50:31.982414 1179266 pod_ready.go:92] pod "kube-controller-manager-multinode-270339" in "kube-system" namespace has status "Ready":"True"
	I1212 00:50:31.982435 1179266 pod_ready.go:81] duration metric: took 148.236831ms waiting for pod "kube-controller-manager-multinode-270339" in "kube-system" namespace to be "Ready" ...
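The "Waited for ... due to client-side throttling, not priority and fairness" lines in this log come from client-go's local token-bucket rate limiter, not from server-side API Priority and Fairness. A minimal sketch of that behaviour, assuming the stock client-go defaults of QPS=5 and Burst=10 (minikube may configure different values); this is an illustration of the limiter, not minikube's own client setup:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// With QPS=5 and Burst=10, the first ten requests go out immediately;
	// after the burst is spent each further request waits roughly 200ms,
	// which is what the "Waited for ..." log lines report.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

	for i := 0; i < 15; i++ {
		start := time.Now()
		limiter.Accept() // blocks until a token is available
		if wait := time.Since(start); wait > time.Millisecond {
			fmt.Printf("request %d waited %v before being sent\n", i, wait)
		}
	}
}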
	I1212 00:50:31.982451 1179266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ff2v2" in "kube-system" namespace to be "Ready" ...
	I1212 00:50:32.178863 1179266 request.go:629] Waited for 196.340453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ff2v2
	I1212 00:50:32.178929 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ff2v2
	I1212 00:50:32.178939 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:32.178948 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:32.178956 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:32.181493 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:32.181526 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:32.181534 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:32.181541 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:32 GMT
	I1212 00:50:32.181547 1179266 round_trippers.go:580]     Audit-Id: 205e4aa8-67fd-421a-81d4-2d6c5f832f9e
	I1212 00:50:32.181554 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:32.181560 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:32.181570 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:32.181792 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ff2v2","generateName":"kube-proxy-","namespace":"kube-system","uid":"e0f8e0fe-73dc-4aed-b0ce-cfbbe59813cf","resourceVersion":"384","creationTimestamp":"2023-12-12T00:50:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9309196b-e55a-4237-bd1a-ef2d9d1fc1f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9309196b-e55a-4237-bd1a-ef2d9d1fc1f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1212 00:50:32.378894 1179266 request.go:629] Waited for 196.614002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:32.378963 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:32.378971 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:32.378980 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:32.378987 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:32.381447 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:32.381471 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:32.381480 1179266 round_trippers.go:580]     Audit-Id: c190748c-5eca-4581-be61-2670f78c37d3
	I1212 00:50:32.381494 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:32.381501 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:32.381507 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:32.381513 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:32.381524 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:32 GMT
	I1212 00:50:32.381660 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:50:32.382045 1179266 pod_ready.go:92] pod "kube-proxy-ff2v2" in "kube-system" namespace has status "Ready":"True"
	I1212 00:50:32.382063 1179266 pod_ready.go:81] duration metric: took 399.599463ms waiting for pod "kube-proxy-ff2v2" in "kube-system" namespace to be "Ready" ...
	I1212 00:50:32.382073 1179266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-270339" in "kube-system" namespace to be "Ready" ...
	I1212 00:50:32.579332 1179266 request.go:629] Waited for 197.197493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-270339
	I1212 00:50:32.579393 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-270339
	I1212 00:50:32.579402 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:32.579430 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:32.579443 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:32.581921 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:32.581947 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:32.581955 1179266 round_trippers.go:580]     Audit-Id: 7e8e783a-370f-46d0-a38e-cc185ef59052
	I1212 00:50:32.581962 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:32.581968 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:32.581975 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:32.581981 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:32.581994 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:32 GMT
	I1212 00:50:32.582123 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-270339","namespace":"kube-system","uid":"2d786047-a353-4b4c-94da-4b02d6191903","resourceVersion":"415","creationTimestamp":"2023-12-12T00:50:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ba500f988f5556d7475d9e018b4ec918","kubernetes.io/config.mirror":"ba500f988f5556d7475d9e018b4ec918","kubernetes.io/config.seen":"2023-12-12T00:50:11.291439472Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1212 00:50:32.778872 1179266 request.go:629] Waited for 196.321467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:32.778948 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:50:32.778957 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:32.778966 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:32.778973 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:32.781396 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:32.781478 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:32.781496 1179266 round_trippers.go:580]     Audit-Id: d327d902-fc3b-4b36-abb9-6f23e90a15cd
	I1212 00:50:32.781503 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:32.781509 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:32.781531 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:32.781544 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:32.781550 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:32 GMT
	I1212 00:50:32.781658 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:50:32.782062 1179266 pod_ready.go:92] pod "kube-scheduler-multinode-270339" in "kube-system" namespace has status "Ready":"True"
	I1212 00:50:32.782080 1179266 pod_ready.go:81] duration metric: took 399.999541ms waiting for pod "kube-scheduler-multinode-270339" in "kube-system" namespace to be "Ready" ...
	I1212 00:50:32.782093 1179266 pod_ready.go:38] duration metric: took 4.000495233s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
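The readiness wait above is a polling loop: every ~500ms the pod (and its node) is fetched again until the PodReady condition reports True. A minimal sketch of the same loop with client-go, assuming a kubeconfig at the default path and reusing the etcd pod name from this log; it is an illustration, not minikube's pod_ready.go implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms, up to 6 minutes, until the pod reports Ready.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"etcd-multinode-270339", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep retrying on transient errors
		}
		return isPodReady(pod), nil
	})
	fmt.Println("ready wait finished, err =", err)
}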
	I1212 00:50:32.782112 1179266 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:50:32.782182 1179266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:50:32.793955 1179266 command_runner.go:130] > 1225
	I1212 00:50:32.795338 1179266 api_server.go:72] duration metric: took 7.2287213s to wait for apiserver process to appear ...
	I1212 00:50:32.795360 1179266 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:50:32.795400 1179266 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1212 00:50:32.804809 1179266 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1212 00:50:32.804899 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1212 00:50:32.804912 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:32.804922 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:32.804948 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:32.806170 1179266 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:50:32.806188 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:32.806197 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:32 GMT
	I1212 00:50:32.806223 1179266 round_trippers.go:580]     Audit-Id: 9534817a-3a19-434f-a216-b1dee8ad0d07
	I1212 00:50:32.806238 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:32.806246 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:32.806254 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:32.806261 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:32.806269 1179266 round_trippers.go:580]     Content-Length: 264
	I1212 00:50:32.806304 1179266 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1212 00:50:32.806395 1179266 api_server.go:141] control plane version: v1.28.4
	I1212 00:50:32.806412 1179266 api_server.go:131] duration metric: took 11.045482ms to wait for apiserver health ...
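The health probe logged above boils down to two GETs against the apiserver: /healthz must return 200 with body "ok", and /version reports the control-plane version (v1.28.4 in this run). A standalone sketch of those two requests, assuming the endpoints allow unauthenticated access (the default system:public-info-viewer binding) and skipping TLS verification for brevity, whereas minikube's own check authenticates with the cluster's client certificates:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// InsecureSkipVerify keeps the example short; a real client would trust
	// the cluster CA from the kubeconfig instead.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	// 1) /healthz: expect HTTP 200 with body "ok".
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

	// 2) /version: decode the gitVersion field of the version info.
	resp, err = client.Get("https://192.168.58.2:8443/version")
	if err != nil {
		panic(err)
	}
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	_ = json.NewDecoder(resp.Body).Decode(&v)
	resp.Body.Close()
	fmt.Println("control plane version:", v.GitVersion)
}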
	I1212 00:50:32.806420 1179266 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:50:32.978801 1179266 request.go:629] Waited for 172.307156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1212 00:50:32.978922 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1212 00:50:32.978938 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:32.978947 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:32.978957 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:32.982529 1179266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:50:32.982553 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:32.982562 1179266 round_trippers.go:580]     Audit-Id: fbabea84-f835-4e93-bbb5-c4d881d6d574
	I1212 00:50:32.982569 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:32.982575 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:32.982581 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:32.982588 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:32.982594 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:32 GMT
	I1212 00:50:32.983579 1179266 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7n4rj","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"16efc97c-281e-4ae4-89a2-7c7507db2e8f","resourceVersion":"406","creationTimestamp":"2023-12-12T00:50:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"80aaeecd-d2e6-429b-972f-733cb0b597ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80aaeecd-d2e6-429b-972f-733cb0b597ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1212 00:50:32.985902 1179266 system_pods.go:59] 8 kube-system pods found
	I1212 00:50:32.985943 1179266 system_pods.go:61] "coredns-5dd5756b68-7n4rj" [16efc97c-281e-4ae4-89a2-7c7507db2e8f] Running
	I1212 00:50:32.985954 1179266 system_pods.go:61] "etcd-multinode-270339" [67c26bc8-7478-40c7-b698-9d505f2d9108] Running
	I1212 00:50:32.985962 1179266 system_pods.go:61] "kindnet-529wf" [c92bbff9-fd78-417d-844a-71166788153a] Running
	I1212 00:50:32.985969 1179266 system_pods.go:61] "kube-apiserver-multinode-270339" [69200dae-a0c0-4c7a-8615-5b242ea522fa] Running
	I1212 00:50:32.985981 1179266 system_pods.go:61] "kube-controller-manager-multinode-270339" [71e4b30c-7581-45a1-a740-05dd4689cd8d] Running
	I1212 00:50:32.985986 1179266 system_pods.go:61] "kube-proxy-ff2v2" [e0f8e0fe-73dc-4aed-b0ce-cfbbe59813cf] Running
	I1212 00:50:32.985991 1179266 system_pods.go:61] "kube-scheduler-multinode-270339" [2d786047-a353-4b4c-94da-4b02d6191903] Running
	I1212 00:50:32.985996 1179266 system_pods.go:61] "storage-provisioner" [667961d7-7931-4d8a-8b56-5d72e1687ab3] Running
	I1212 00:50:32.986002 1179266 system_pods.go:74] duration metric: took 179.576515ms to wait for pod list to return data ...
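The system-pods check above is a single LIST of the kube-system namespace followed by a per-pod status inspection. A rough client-go equivalent of that read (the kubeconfig path is an assumption, and the check itself is simplified to printing each pod's phase):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One request returns every kube-system pod, matching the logged
	// GET /api/v1/namespaces/kube-system/pods.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
	}
}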
	I1212 00:50:32.986011 1179266 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:50:33.178356 1179266 request.go:629] Waited for 192.273064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:50:33.178514 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:50:33.178525 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:33.178534 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:33.178541 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:33.181105 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:33.181172 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:33.181185 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:33.181192 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:33.181199 1179266 round_trippers.go:580]     Content-Length: 261
	I1212 00:50:33.181205 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:33 GMT
	I1212 00:50:33.181211 1179266 round_trippers.go:580]     Audit-Id: a4850d7b-e31c-404e-852e-04b5dbbd8719
	I1212 00:50:33.181221 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:33.181227 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:33.181278 1179266 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"703c7e2d-5a69-4b75-9440-772f348e405f","resourceVersion":"322","creationTimestamp":"2023-12-12T00:50:25Z"}}]}
	I1212 00:50:33.181486 1179266 default_sa.go:45] found service account: "default"
	I1212 00:50:33.181507 1179266 default_sa.go:55] duration metric: took 195.483387ms for default service account to be created ...
	I1212 00:50:33.181516 1179266 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:50:33.378883 1179266 request.go:629] Waited for 197.303443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1212 00:50:33.378944 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1212 00:50:33.378953 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:33.378962 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:33.378973 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:33.382376 1179266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:50:33.382402 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:33.382411 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:33.382418 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:33.382424 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:33 GMT
	I1212 00:50:33.382431 1179266 round_trippers.go:580]     Audit-Id: 096d7fc5-2cb0-45c8-9ad6-ff98708440f7
	I1212 00:50:33.382441 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:33.382453 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:33.383204 1179266 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7n4rj","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"16efc97c-281e-4ae4-89a2-7c7507db2e8f","resourceVersion":"406","creationTimestamp":"2023-12-12T00:50:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"80aaeecd-d2e6-429b-972f-733cb0b597ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80aaeecd-d2e6-429b-972f-733cb0b597ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1212 00:50:33.385597 1179266 system_pods.go:86] 8 kube-system pods found
	I1212 00:50:33.385624 1179266 system_pods.go:89] "coredns-5dd5756b68-7n4rj" [16efc97c-281e-4ae4-89a2-7c7507db2e8f] Running
	I1212 00:50:33.385631 1179266 system_pods.go:89] "etcd-multinode-270339" [67c26bc8-7478-40c7-b698-9d505f2d9108] Running
	I1212 00:50:33.385636 1179266 system_pods.go:89] "kindnet-529wf" [c92bbff9-fd78-417d-844a-71166788153a] Running
	I1212 00:50:33.385643 1179266 system_pods.go:89] "kube-apiserver-multinode-270339" [69200dae-a0c0-4c7a-8615-5b242ea522fa] Running
	I1212 00:50:33.385648 1179266 system_pods.go:89] "kube-controller-manager-multinode-270339" [71e4b30c-7581-45a1-a740-05dd4689cd8d] Running
	I1212 00:50:33.385653 1179266 system_pods.go:89] "kube-proxy-ff2v2" [e0f8e0fe-73dc-4aed-b0ce-cfbbe59813cf] Running
	I1212 00:50:33.385658 1179266 system_pods.go:89] "kube-scheduler-multinode-270339" [2d786047-a353-4b4c-94da-4b02d6191903] Running
	I1212 00:50:33.385664 1179266 system_pods.go:89] "storage-provisioner" [667961d7-7931-4d8a-8b56-5d72e1687ab3] Running
	I1212 00:50:33.385672 1179266 system_pods.go:126] duration metric: took 204.149619ms to wait for k8s-apps to be running ...
	I1212 00:50:33.385683 1179266 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:50:33.385741 1179266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:50:33.399541 1179266 system_svc.go:56] duration metric: took 13.84803ms WaitForService to wait for kubelet.
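The kubelet wait is a plain systemd query: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active, which is all WaitForService needs to know. A minimal local sketch of that check (minikube runs the equivalent command inside the node over SSH via ssh_runner, as the log line above shows):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means the kubelet unit is active; any non-zero exit
	// surfaces here as a non-nil error.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}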
	I1212 00:50:33.399612 1179266 kubeadm.go:581] duration metric: took 7.832998505s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 00:50:33.399645 1179266 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:50:33.579092 1179266 request.go:629] Waited for 179.337072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1212 00:50:33.579162 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1212 00:50:33.579179 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:33.579191 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:33.579201 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:33.581688 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:33.581711 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:33.581719 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:33.581725 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:33.581732 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:33 GMT
	I1212 00:50:33.581738 1179266 round_trippers.go:580]     Audit-Id: eded2220-8134-48a0-8053-453aa40e4baa
	I1212 00:50:33.581745 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:33.581755 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:33.581938 1179266 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I1212 00:50:33.582392 1179266 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 00:50:33.582435 1179266 node_conditions.go:123] node cpu capacity is 2
	I1212 00:50:33.582447 1179266 node_conditions.go:105] duration metric: took 182.785779ms to run NodePressure ...
	I1212 00:50:33.582464 1179266 start.go:228] waiting for startup goroutines ...
	I1212 00:50:33.582474 1179266 start.go:233] waiting for cluster config update ...
	I1212 00:50:33.582485 1179266 start.go:242] writing updated cluster config ...
	I1212 00:50:33.585335 1179266 out.go:177] 
	I1212 00:50:33.587701 1179266 config.go:182] Loaded profile config "multinode-270339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:50:33.587789 1179266 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/config.json ...
	I1212 00:50:33.590741 1179266 out.go:177] * Starting worker node multinode-270339-m02 in cluster multinode-270339
	I1212 00:50:33.592476 1179266 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 00:50:33.594572 1179266 out.go:177] * Pulling base image ...
	I1212 00:50:33.597578 1179266 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 00:50:33.597610 1179266 cache.go:56] Caching tarball of preloaded images
	I1212 00:50:33.597678 1179266 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:50:33.597715 1179266 preload.go:174] Found /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:50:33.597728 1179266 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 00:50:33.597826 1179266 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/config.json ...
	I1212 00:50:33.615154 1179266 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon, skipping pull
	I1212 00:50:33.615180 1179266 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 exists in daemon, skipping load
	I1212 00:50:33.615205 1179266 cache.go:194] Successfully downloaded all kic artifacts
	I1212 00:50:33.615234 1179266 start.go:365] acquiring machines lock for multinode-270339-m02: {Name:mk838ee1932a880f642c9494a626a11aaaf29c1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:50:33.615354 1179266 start.go:369] acquired machines lock for "multinode-270339-m02" in 103.094µs
	I1212 00:50:33.615379 1179266 start.go:93] Provisioning new machine with config: &{Name:multinode-270339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-270339 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 00:50:33.615458 1179266 start.go:125] createHost starting for "m02" (driver="docker")
	I1212 00:50:33.619102 1179266 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1212 00:50:33.619224 1179266 start.go:159] libmachine.API.Create for "multinode-270339" (driver="docker")
	I1212 00:50:33.619252 1179266 client.go:168] LocalClient.Create starting
	I1212 00:50:33.619319 1179266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem
	I1212 00:50:33.619353 1179266 main.go:141] libmachine: Decoding PEM data...
	I1212 00:50:33.619369 1179266 main.go:141] libmachine: Parsing certificate...
	I1212 00:50:33.619434 1179266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem
	I1212 00:50:33.619464 1179266 main.go:141] libmachine: Decoding PEM data...
	I1212 00:50:33.619479 1179266 main.go:141] libmachine: Parsing certificate...
	I1212 00:50:33.619745 1179266 cli_runner.go:164] Run: docker network inspect multinode-270339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:50:33.637767 1179266 network_create.go:77] Found existing network {name:multinode-270339 subnet:0x40029aced0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1212 00:50:33.637814 1179266 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-270339-m02" container
	I1212 00:50:33.637892 1179266 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:50:33.655028 1179266 cli_runner.go:164] Run: docker volume create multinode-270339-m02 --label name.minikube.sigs.k8s.io=multinode-270339-m02 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:50:33.674243 1179266 oci.go:103] Successfully created a docker volume multinode-270339-m02
	I1212 00:50:33.674332 1179266 cli_runner.go:164] Run: docker run --rm --name multinode-270339-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-270339-m02 --entrypoint /usr/bin/test -v multinode-270339-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -d /var/lib
	I1212 00:50:34.258144 1179266 oci.go:107] Successfully prepared a docker volume multinode-270339-m02
	I1212 00:50:34.258186 1179266 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 00:50:34.258207 1179266 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:50:34.258291 1179266 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-270339-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:50:38.540550 1179266 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-270339-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 -I lz4 -xf /preloaded.tar -C /extractDir: (4.282198863s)
	I1212 00:50:38.540581 1179266 kic.go:203] duration metric: took 4.282373 seconds to extract preloaded images to volume
	W1212 00:50:38.540724 1179266 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 00:50:38.540832 1179266 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:50:38.605493 1179266 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-270339-m02 --name multinode-270339-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-270339-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-270339-m02 --network multinode-270339 --ip 192.168.58.3 --volume multinode-270339-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401
	I1212 00:50:38.968718 1179266 cli_runner.go:164] Run: docker container inspect multinode-270339-m02 --format={{.State.Running}}
	I1212 00:50:38.992517 1179266 cli_runner.go:164] Run: docker container inspect multinode-270339-m02 --format={{.State.Status}}
	I1212 00:50:39.019957 1179266 cli_runner.go:164] Run: docker exec multinode-270339-m02 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:50:39.098759 1179266 oci.go:144] the created container "multinode-270339-m02" has a running status.
	I1212 00:50:39.098787 1179266 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339-m02/id_rsa...
	I1212 00:50:39.916944 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1212 00:50:39.917035 1179266 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:50:39.946811 1179266 cli_runner.go:164] Run: docker container inspect multinode-270339-m02 --format={{.State.Status}}
	I1212 00:50:39.977386 1179266 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:50:39.977406 1179266 kic_runner.go:114] Args: [docker exec --privileged multinode-270339-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:50:40.057071 1179266 cli_runner.go:164] Run: docker container inspect multinode-270339-m02 --format={{.State.Status}}
	I1212 00:50:40.094869 1179266 machine.go:88] provisioning docker machine ...
	I1212 00:50:40.094903 1179266 ubuntu.go:169] provisioning hostname "multinode-270339-m02"
	I1212 00:50:40.094974 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339-m02
	I1212 00:50:40.120834 1179266 main.go:141] libmachine: Using SSH client type: native
	I1212 00:50:40.121276 1179266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34090 <nil> <nil>}
	I1212 00:50:40.121290 1179266 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-270339-m02 && echo "multinode-270339-m02" | sudo tee /etc/hostname
	I1212 00:50:40.295798 1179266 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-270339-m02
	
	I1212 00:50:40.295879 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339-m02
	I1212 00:50:40.324748 1179266 main.go:141] libmachine: Using SSH client type: native
	I1212 00:50:40.325154 1179266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34090 <nil> <nil>}
	I1212 00:50:40.325176 1179266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-270339-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-270339-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-270339-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:50:40.466277 1179266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
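For context, the two SSH commands above set the new node's hostname and make sure /etc/hosts resolves it. A minimal Go sketch of running such a command over SSH follows; it is illustrative only (not minikube's ssh_runner), and the address, user, and key path are assumptions taken from the values logged above.

    // hostname_ssh_sketch.go: illustrative only; runs the logged hostname command
    // over SSH against the forwarded port of the node container.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Assumed values, matching what the log shows for this node.
    	const addr = "127.0.0.1:34090"
    	const keyPath = "/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339-m02/id_rsa"

    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local container
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer session.Close()

    	// The same command the provisioner runs to set the node's hostname.
    	out, err := session.CombinedOutput(`sudo hostname multinode-270339-m02 && echo "multinode-270339-m02" | sudo tee /etc/hostname`)
    	fmt.Printf("%s", out)
    	if err != nil {
    		os.Exit(1)
    	}
    }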
	I1212 00:50:40.466304 1179266 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17764-1111943/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-1111943/.minikube}
	I1212 00:50:40.466320 1179266 ubuntu.go:177] setting up certificates
	I1212 00:50:40.466331 1179266 provision.go:83] configureAuth start
	I1212 00:50:40.466395 1179266 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-270339-m02
	I1212 00:50:40.486378 1179266 provision.go:138] copyHostCerts
	I1212 00:50:40.486420 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem
	I1212 00:50:40.486451 1179266 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem, removing ...
	I1212 00:50:40.486461 1179266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem
	I1212 00:50:40.486537 1179266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem (1082 bytes)
	I1212 00:50:40.486618 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem
	I1212 00:50:40.486642 1179266 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem, removing ...
	I1212 00:50:40.486651 1179266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem
	I1212 00:50:40.486678 1179266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem (1123 bytes)
	I1212 00:50:40.486722 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem
	I1212 00:50:40.486743 1179266 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem, removing ...
	I1212 00:50:40.486751 1179266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem
	I1212 00:50:40.486776 1179266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem (1679 bytes)
	I1212 00:50:40.486824 1179266 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem org=jenkins.multinode-270339-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-270339-m02]
	I1212 00:50:40.770330 1179266 provision.go:172] copyRemoteCerts
	I1212 00:50:40.770402 1179266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:50:40.770450 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339-m02
	I1212 00:50:40.788843 1179266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34090 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339-m02/id_rsa Username:docker}
	I1212 00:50:40.887615 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:50:40.887687 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:50:40.915702 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:50:40.915811 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 00:50:40.945890 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:50:40.945953 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:50:40.974271 1179266 provision.go:86] duration metric: configureAuth took 507.926235ms
	I1212 00:50:40.974297 1179266 ubuntu.go:193] setting minikube options for container-runtime
	I1212 00:50:40.974487 1179266 config.go:182] Loaded profile config "multinode-270339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:50:40.974586 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339-m02
	I1212 00:50:40.992855 1179266 main.go:141] libmachine: Using SSH client type: native
	I1212 00:50:40.993295 1179266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34090 <nil> <nil>}
	I1212 00:50:40.993311 1179266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:50:41.248518 1179266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:50:41.248589 1179266 machine.go:91] provisioned docker machine in 1.153685804s
	I1212 00:50:41.248612 1179266 client.go:171] LocalClient.Create took 7.629353825s
	I1212 00:50:41.248659 1179266 start.go:167] duration metric: libmachine.API.Create for "multinode-270339" took 7.629435201s
	I1212 00:50:41.248682 1179266 start.go:300] post-start starting for "multinode-270339-m02" (driver="docker")
	I1212 00:50:41.248704 1179266 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:50:41.248792 1179266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:50:41.248862 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339-m02
	I1212 00:50:41.272506 1179266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34090 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339-m02/id_rsa Username:docker}
	I1212 00:50:41.376411 1179266 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:50:41.380431 1179266 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1212 00:50:41.380452 1179266 command_runner.go:130] > NAME="Ubuntu"
	I1212 00:50:41.380459 1179266 command_runner.go:130] > VERSION_ID="22.04"
	I1212 00:50:41.380470 1179266 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1212 00:50:41.380476 1179266 command_runner.go:130] > VERSION_CODENAME=jammy
	I1212 00:50:41.380481 1179266 command_runner.go:130] > ID=ubuntu
	I1212 00:50:41.380486 1179266 command_runner.go:130] > ID_LIKE=debian
	I1212 00:50:41.380491 1179266 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1212 00:50:41.380497 1179266 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1212 00:50:41.380504 1179266 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1212 00:50:41.380523 1179266 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1212 00:50:41.380530 1179266 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1212 00:50:41.380575 1179266 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:50:41.380598 1179266 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 00:50:41.380608 1179266 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 00:50:41.380614 1179266 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 00:50:41.380625 1179266 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/addons for local assets ...
	I1212 00:50:41.380681 1179266 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/files for local assets ...
	I1212 00:50:41.380757 1179266 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> 11173832.pem in /etc/ssl/certs
	I1212 00:50:41.380764 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> /etc/ssl/certs/11173832.pem
	I1212 00:50:41.380860 1179266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:50:41.391423 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem --> /etc/ssl/certs/11173832.pem (1708 bytes)
	I1212 00:50:41.420369 1179266 start.go:303] post-start completed in 171.661754ms
	I1212 00:50:41.420740 1179266 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-270339-m02
	I1212 00:50:41.439055 1179266 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/config.json ...
	I1212 00:50:41.439343 1179266 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:50:41.439396 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339-m02
	I1212 00:50:41.457421 1179266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34090 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339-m02/id_rsa Username:docker}
	I1212 00:50:41.559958 1179266 command_runner.go:130] > 12%!
	(MISSING)I1212 00:50:41.560427 1179266 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:50:41.566932 1179266 command_runner.go:130] > 171G
	I1212 00:50:41.567605 1179266 start.go:128] duration metric: createHost completed in 7.952134039s
	I1212 00:50:41.567626 1179266 start.go:83] releasing machines lock for "multinode-270339-m02", held for 7.952263603s
	I1212 00:50:41.567699 1179266 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-270339-m02
	I1212 00:50:41.589959 1179266 out.go:177] * Found network options:
	I1212 00:50:41.591996 1179266 out.go:177]   - NO_PROXY=192.168.58.2
	W1212 00:50:41.594145 1179266 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 00:50:41.594209 1179266 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:50:41.594289 1179266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:50:41.594340 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339-m02
	I1212 00:50:41.594602 1179266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:50:41.594651 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339-m02
	I1212 00:50:41.614161 1179266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34090 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339-m02/id_rsa Username:docker}
	I1212 00:50:41.616200 1179266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34090 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339-m02/id_rsa Username:docker}
	I1212 00:50:41.850007 1179266 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:50:41.875180 1179266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:50:41.880681 1179266 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1212 00:50:41.880706 1179266 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1212 00:50:41.880715 1179266 command_runner.go:130] > Device: b3h/179d	Inode: 1568803     Links: 1
	I1212 00:50:41.880723 1179266 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:50:41.880737 1179266 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1212 00:50:41.880746 1179266 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1212 00:50:41.880752 1179266 command_runner.go:130] > Change: 2023-12-12 00:11:51.073557127 +0000
	I1212 00:50:41.880760 1179266 command_runner.go:130] >  Birth: 2023-12-12 00:11:51.073557127 +0000
	I1212 00:50:41.880828 1179266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:50:41.904313 1179266 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 00:50:41.904407 1179266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:50:41.946103 1179266 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1212 00:50:41.946157 1179266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
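The step above neutralizes pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI that minikube manages stays active. A minimal Go sketch of the same idea (illustrative, not minikube's implementation; the directory and glob patterns are taken from the logged find command):

    // disable_cni_sketch.go: rename bridge/podman CNI configs in /etc/cni/net.d
    // to "<name>.mk_disabled", mirroring the step logged above.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	patterns := []string{"*bridge*", "*podman*"} // same globs as the logged find command
    	for _, p := range patterns {
    		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", p))
    		if err != nil {
    			fmt.Fprintln(os.Stderr, err)
    			continue
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled on a previous run
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				fmt.Fprintln(os.Stderr, err)
    				continue
    			}
    			fmt.Println("disabled", m)
    		}
    	}
    }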
	I1212 00:50:41.946192 1179266 start.go:475] detecting cgroup driver to use...
	I1212 00:50:41.946234 1179266 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 00:50:41.946331 1179266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:50:41.966762 1179266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:50:41.981692 1179266 docker.go:203] disabling cri-docker service (if available) ...
	I1212 00:50:41.981760 1179266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:50:42.000197 1179266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:50:42.019474 1179266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:50:42.134137 1179266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:50:42.152600 1179266 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1212 00:50:42.266737 1179266 docker.go:219] disabling docker service ...
	I1212 00:50:42.266806 1179266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:50:42.297226 1179266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:50:42.313233 1179266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:50:42.415935 1179266 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1212 00:50:42.416055 1179266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:50:42.527153 1179266 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1212 00:50:42.527250 1179266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:50:42.547580 1179266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:50:42.566172 1179266 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 00:50:42.567518 1179266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 00:50:42.567612 1179266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:50:42.580179 1179266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:50:42.580274 1179266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:50:42.592514 1179266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:50:42.604674 1179266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:50:42.617593 1179266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:50:42.629776 1179266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:50:42.639217 1179266 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:50:42.640521 1179266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:50:42.650868 1179266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:50:42.750891 1179266 ssh_runner.go:195] Run: sudo systemctl restart crio
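The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.9 as the pause image and cgroupfs as the cgroup manager (with conmon in the "pod" cgroup), then restarts crio. A minimal Go sketch of an equivalent edit (illustrative only, not minikube's code):

    // crio_config_sketch.go: apply the same three settings the logged sed commands
    // apply, then restart the crio service so they take effect.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Equivalent to the sed invocations in the log: set the pause image, drop any
    	// existing conmon_cgroup line, then set cgroup_manager and conmon_cgroup = "pod".
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
    	data = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n?`).ReplaceAll(data, nil)
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
    	if err := os.WriteFile(conf, data, 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Restart CRI-O, as the log does, so the new settings take effect.
    	if out, err := exec.Command("sudo", "systemctl", "restart", "crio").CombinedOutput(); err != nil {
    		fmt.Fprintf(os.Stderr, "restart crio: %v\n%s", err, out)
    		os.Exit(1)
    	}
    }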
	I1212 00:50:42.888026 1179266 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:50:42.888096 1179266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:50:42.892481 1179266 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 00:50:42.892502 1179266 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 00:50:42.892510 1179266 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I1212 00:50:42.892518 1179266 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:50:42.892525 1179266 command_runner.go:130] > Access: 2023-12-12 00:50:42.871797458 +0000
	I1212 00:50:42.892532 1179266 command_runner.go:130] > Modify: 2023-12-12 00:50:42.871797458 +0000
	I1212 00:50:42.892545 1179266 command_runner.go:130] > Change: 2023-12-12 00:50:42.871797458 +0000
	I1212 00:50:42.892556 1179266 command_runner.go:130] >  Birth: -
	I1212 00:50:42.892772 1179266 start.go:543] Will wait 60s for crictl version
	I1212 00:50:42.892824 1179266 ssh_runner.go:195] Run: which crictl
	I1212 00:50:42.896660 1179266 command_runner.go:130] > /usr/bin/crictl
	I1212 00:50:42.897026 1179266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:50:42.942320 1179266 command_runner.go:130] > Version:  0.1.0
	I1212 00:50:42.942343 1179266 command_runner.go:130] > RuntimeName:  cri-o
	I1212 00:50:42.942349 1179266 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1212 00:50:42.942355 1179266 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 00:50:42.944764 1179266 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1212 00:50:42.944844 1179266 ssh_runner.go:195] Run: crio --version
	I1212 00:50:42.994693 1179266 command_runner.go:130] > crio version 1.24.6
	I1212 00:50:42.994715 1179266 command_runner.go:130] > Version:          1.24.6
	I1212 00:50:42.994724 1179266 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1212 00:50:42.994729 1179266 command_runner.go:130] > GitTreeState:     clean
	I1212 00:50:42.994736 1179266 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1212 00:50:42.994742 1179266 command_runner.go:130] > GoVersion:        go1.18.2
	I1212 00:50:42.994747 1179266 command_runner.go:130] > Compiler:         gc
	I1212 00:50:42.994752 1179266 command_runner.go:130] > Platform:         linux/arm64
	I1212 00:50:42.994759 1179266 command_runner.go:130] > Linkmode:         dynamic
	I1212 00:50:42.994778 1179266 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 00:50:42.994786 1179266 command_runner.go:130] > SeccompEnabled:   true
	I1212 00:50:42.994792 1179266 command_runner.go:130] > AppArmorEnabled:  false
	I1212 00:50:42.997167 1179266 ssh_runner.go:195] Run: crio --version
	I1212 00:50:43.051434 1179266 command_runner.go:130] > crio version 1.24.6
	I1212 00:50:43.051458 1179266 command_runner.go:130] > Version:          1.24.6
	I1212 00:50:43.051474 1179266 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1212 00:50:43.051479 1179266 command_runner.go:130] > GitTreeState:     clean
	I1212 00:50:43.051486 1179266 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1212 00:50:43.051493 1179266 command_runner.go:130] > GoVersion:        go1.18.2
	I1212 00:50:43.051498 1179266 command_runner.go:130] > Compiler:         gc
	I1212 00:50:43.051504 1179266 command_runner.go:130] > Platform:         linux/arm64
	I1212 00:50:43.051513 1179266 command_runner.go:130] > Linkmode:         dynamic
	I1212 00:50:43.051523 1179266 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 00:50:43.051531 1179266 command_runner.go:130] > SeccompEnabled:   true
	I1212 00:50:43.051537 1179266 command_runner.go:130] > AppArmorEnabled:  false
	I1212 00:50:43.055686 1179266 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1212 00:50:43.057563 1179266 out.go:177]   - env NO_PROXY=192.168.58.2
	I1212 00:50:43.059817 1179266 cli_runner.go:164] Run: docker network inspect multinode-270339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:50:43.077439 1179266 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1212 00:50:43.082238 1179266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:50:43.095358 1179266 certs.go:56] Setting up /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339 for IP: 192.168.58.3
	I1212 00:50:43.095390 1179266 certs.go:190] acquiring lock for shared ca certs: {Name:mk50788b4819ee46b65351495e43cdf246a6ddce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:50:43.095530 1179266 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key
	I1212 00:50:43.095574 1179266 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key
	I1212 00:50:43.095588 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:50:43.095602 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:50:43.095617 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:50:43.095631 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:50:43.095685 1179266 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383.pem (1338 bytes)
	W1212 00:50:43.095717 1179266 certs.go:433] ignoring /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383_empty.pem, impossibly tiny 0 bytes
	I1212 00:50:43.095730 1179266 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:50:43.095759 1179266 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:50:43.095786 1179266 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:50:43.095814 1179266 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem (1679 bytes)
	I1212 00:50:43.095861 1179266 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem (1708 bytes)
	I1212 00:50:43.095893 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:50:43.095909 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383.pem -> /usr/share/ca-certificates/1117383.pem
	I1212 00:50:43.095924 1179266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> /usr/share/ca-certificates/11173832.pem
	I1212 00:50:43.096308 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:50:43.124628 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:50:43.153312 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:50:43.181324 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:50:43.210072 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:50:43.238410 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/1117383.pem --> /usr/share/ca-certificates/1117383.pem (1338 bytes)
	I1212 00:50:43.267494 1179266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem --> /usr/share/ca-certificates/11173832.pem (1708 bytes)
	I1212 00:50:43.304742 1179266 ssh_runner.go:195] Run: openssl version
	I1212 00:50:43.311268 1179266 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1212 00:50:43.311645 1179266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:50:43.323247 1179266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:50:43.327573 1179266 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:50:43.327660 1179266 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:50:43.327743 1179266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:50:43.335810 1179266 command_runner.go:130] > b5213941
	I1212 00:50:43.336989 1179266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:50:43.348992 1179266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1117383.pem && ln -fs /usr/share/ca-certificates/1117383.pem /etc/ssl/certs/1117383.pem"
	I1212 00:50:43.360421 1179266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1117383.pem
	I1212 00:50:43.364997 1179266 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 00:25 /usr/share/ca-certificates/1117383.pem
	I1212 00:50:43.365092 1179266 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:25 /usr/share/ca-certificates/1117383.pem
	I1212 00:50:43.365166 1179266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1117383.pem
	I1212 00:50:43.373801 1179266 command_runner.go:130] > 51391683
	I1212 00:50:43.373952 1179266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1117383.pem /etc/ssl/certs/51391683.0"
	I1212 00:50:43.385582 1179266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11173832.pem && ln -fs /usr/share/ca-certificates/11173832.pem /etc/ssl/certs/11173832.pem"
	I1212 00:50:43.397028 1179266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11173832.pem
	I1212 00:50:43.401631 1179266 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 00:25 /usr/share/ca-certificates/11173832.pem
	I1212 00:50:43.401993 1179266 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:25 /usr/share/ca-certificates/11173832.pem
	I1212 00:50:43.402057 1179266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11173832.pem
	I1212 00:50:43.410440 1179266 command_runner.go:130] > 3ec20f2e
	I1212 00:50:43.410529 1179266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11173832.pem /etc/ssl/certs/3ec20f2e.0"
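Each CA certificate copied above is made discoverable by OpenSSL by linking it into /etc/ssl/certs under its subject hash (e.g. b5213941.0 for minikubeCA.pem). A minimal Go sketch of that link step, shelling out to openssl for the hash exactly as the log does (illustrative only; the certificate path is an example):

    // cert_link_sketch.go: link a CA certificate into /etc/ssl/certs under its
    // OpenSSL subject-hash name, mirroring the "openssl x509 -hash" + "ln -fs"
    // sequence logged above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkBySubjectHash(certPath string) error {
    	// Same command the log runs to derive the link name.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hash %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace any stale link, mirroring "ln -fs"
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }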
	I1212 00:50:43.422287 1179266 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 00:50:43.426535 1179266 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 00:50:43.426605 1179266 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 00:50:43.426715 1179266 ssh_runner.go:195] Run: crio config
	I1212 00:50:43.481153 1179266 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 00:50:43.481179 1179266 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 00:50:43.481188 1179266 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 00:50:43.481192 1179266 command_runner.go:130] > #
	I1212 00:50:43.481207 1179266 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 00:50:43.481215 1179266 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 00:50:43.481223 1179266 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 00:50:43.481233 1179266 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 00:50:43.481238 1179266 command_runner.go:130] > # reload'.
	I1212 00:50:43.481258 1179266 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 00:50:43.481267 1179266 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 00:50:43.481279 1179266 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 00:50:43.481287 1179266 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 00:50:43.481292 1179266 command_runner.go:130] > [crio]
	I1212 00:50:43.481306 1179266 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 00:50:43.481312 1179266 command_runner.go:130] > # containers images, in this directory.
	I1212 00:50:43.481515 1179266 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1212 00:50:43.481532 1179266 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 00:50:43.481756 1179266 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1212 00:50:43.481773 1179266 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 00:50:43.481782 1179266 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 00:50:43.481788 1179266 command_runner.go:130] > # storage_driver = "vfs"
	I1212 00:50:43.481795 1179266 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 00:50:43.481804 1179266 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 00:50:43.481810 1179266 command_runner.go:130] > # storage_option = [
	I1212 00:50:43.481814 1179266 command_runner.go:130] > # ]
	I1212 00:50:43.481823 1179266 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 00:50:43.481834 1179266 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 00:50:43.481841 1179266 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 00:50:43.481858 1179266 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 00:50:43.481866 1179266 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 00:50:43.481872 1179266 command_runner.go:130] > # always happen on a node reboot
	I1212 00:50:43.481879 1179266 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 00:50:43.481888 1179266 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 00:50:43.481895 1179266 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 00:50:43.481911 1179266 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 00:50:43.481923 1179266 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 00:50:43.481932 1179266 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 00:50:43.481942 1179266 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 00:50:43.481952 1179266 command_runner.go:130] > # internal_wipe = true
	I1212 00:50:43.481959 1179266 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 00:50:43.481977 1179266 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 00:50:43.481984 1179266 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 00:50:43.481990 1179266 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 00:50:43.481998 1179266 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 00:50:43.482005 1179266 command_runner.go:130] > [crio.api]
	I1212 00:50:43.482011 1179266 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 00:50:43.482019 1179266 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 00:50:43.482029 1179266 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 00:50:43.482035 1179266 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 00:50:43.482044 1179266 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 00:50:43.482061 1179266 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 00:50:43.482066 1179266 command_runner.go:130] > # stream_port = "0"
	I1212 00:50:43.482073 1179266 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 00:50:43.482268 1179266 command_runner.go:130] > # stream_enable_tls = false
	I1212 00:50:43.482284 1179266 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 00:50:43.482290 1179266 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 00:50:43.482297 1179266 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 00:50:43.482305 1179266 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 00:50:43.482312 1179266 command_runner.go:130] > # minutes.
	I1212 00:50:43.482320 1179266 command_runner.go:130] > # stream_tls_cert = ""
	I1212 00:50:43.482328 1179266 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 00:50:43.482338 1179266 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 00:50:43.482344 1179266 command_runner.go:130] > # stream_tls_key = ""
	I1212 00:50:43.482351 1179266 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 00:50:43.482364 1179266 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 00:50:43.482372 1179266 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 00:50:43.482377 1179266 command_runner.go:130] > # stream_tls_ca = ""
	I1212 00:50:43.482390 1179266 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 00:50:43.482396 1179266 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1212 00:50:43.482405 1179266 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 00:50:43.482414 1179266 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1212 00:50:43.482428 1179266 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 00:50:43.482438 1179266 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 00:50:43.482444 1179266 command_runner.go:130] > [crio.runtime]
	I1212 00:50:43.482452 1179266 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 00:50:43.482463 1179266 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 00:50:43.482469 1179266 command_runner.go:130] > # "nofile=1024:2048"
	I1212 00:50:43.482477 1179266 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 00:50:43.482482 1179266 command_runner.go:130] > # default_ulimits = [
	I1212 00:50:43.482486 1179266 command_runner.go:130] > # ]
	I1212 00:50:43.482494 1179266 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 00:50:43.482500 1179266 command_runner.go:130] > # no_pivot = false
	I1212 00:50:43.482507 1179266 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 00:50:43.482518 1179266 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 00:50:43.482541 1179266 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 00:50:43.482553 1179266 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 00:50:43.482560 1179266 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 00:50:43.482577 1179266 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:50:43.482582 1179266 command_runner.go:130] > # conmon = ""
	I1212 00:50:43.482593 1179266 command_runner.go:130] > # Cgroup setting for conmon
	I1212 00:50:43.482602 1179266 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 00:50:43.482607 1179266 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 00:50:43.482614 1179266 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 00:50:43.482621 1179266 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 00:50:43.482632 1179266 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:50:43.482638 1179266 command_runner.go:130] > # conmon_env = [
	I1212 00:50:43.482811 1179266 command_runner.go:130] > # ]
	I1212 00:50:43.482826 1179266 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 00:50:43.482833 1179266 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 00:50:43.482840 1179266 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 00:50:43.482845 1179266 command_runner.go:130] > # default_env = [
	I1212 00:50:43.482850 1179266 command_runner.go:130] > # ]
	I1212 00:50:43.482860 1179266 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 00:50:43.482865 1179266 command_runner.go:130] > # selinux = false
	I1212 00:50:43.482874 1179266 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 00:50:43.482885 1179266 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 00:50:43.482893 1179266 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 00:50:43.482904 1179266 command_runner.go:130] > # seccomp_profile = ""
	I1212 00:50:43.482911 1179266 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 00:50:43.482917 1179266 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 00:50:43.482930 1179266 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 00:50:43.482936 1179266 command_runner.go:130] > # which might increase security.
	I1212 00:50:43.482941 1179266 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1212 00:50:43.482951 1179266 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 00:50:43.482964 1179266 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 00:50:43.482972 1179266 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 00:50:43.482979 1179266 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 00:50:43.482990 1179266 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:50:43.482996 1179266 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 00:50:43.483002 1179266 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 00:50:43.483014 1179266 command_runner.go:130] > # the cgroup blockio controller.
	I1212 00:50:43.483172 1179266 command_runner.go:130] > # blockio_config_file = ""
	I1212 00:50:43.483189 1179266 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 00:50:43.483195 1179266 command_runner.go:130] > # irqbalance daemon.
	I1212 00:50:43.483202 1179266 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 00:50:43.483215 1179266 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 00:50:43.483221 1179266 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:50:43.483227 1179266 command_runner.go:130] > # rdt_config_file = ""
	I1212 00:50:43.483234 1179266 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 00:50:43.483241 1179266 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 00:50:43.483249 1179266 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 00:50:43.483254 1179266 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 00:50:43.483262 1179266 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 00:50:43.483275 1179266 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 00:50:43.483280 1179266 command_runner.go:130] > # will be added.
	I1212 00:50:43.483285 1179266 command_runner.go:130] > # default_capabilities = [
	I1212 00:50:43.483292 1179266 command_runner.go:130] > # 	"CHOWN",
	I1212 00:50:43.483298 1179266 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 00:50:43.483303 1179266 command_runner.go:130] > # 	"FSETID",
	I1212 00:50:43.483309 1179266 command_runner.go:130] > # 	"FOWNER",
	I1212 00:50:43.483490 1179266 command_runner.go:130] > # 	"SETGID",
	I1212 00:50:43.483503 1179266 command_runner.go:130] > # 	"SETUID",
	I1212 00:50:43.483509 1179266 command_runner.go:130] > # 	"SETPCAP",
	I1212 00:50:43.483514 1179266 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 00:50:43.483519 1179266 command_runner.go:130] > # 	"KILL",
	I1212 00:50:43.483523 1179266 command_runner.go:130] > # ]
	I1212 00:50:43.483533 1179266 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1212 00:50:43.483542 1179266 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1212 00:50:43.483552 1179266 command_runner.go:130] > # add_inheritable_capabilities = true
	I1212 00:50:43.483560 1179266 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 00:50:43.483567 1179266 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:50:43.483719 1179266 command_runner.go:130] > # default_sysctls = [
	I1212 00:50:43.483733 1179266 command_runner.go:130] > # ]
	I1212 00:50:43.483739 1179266 command_runner.go:130] > # List of devices on the host that a
	I1212 00:50:43.483747 1179266 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 00:50:43.483752 1179266 command_runner.go:130] > # allowed_devices = [
	I1212 00:50:43.483757 1179266 command_runner.go:130] > # 	"/dev/fuse",
	I1212 00:50:43.483761 1179266 command_runner.go:130] > # ]
	I1212 00:50:43.483767 1179266 command_runner.go:130] > # List of additional devices, specified as
	I1212 00:50:43.483783 1179266 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 00:50:43.483799 1179266 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 00:50:43.483807 1179266 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:50:43.483816 1179266 command_runner.go:130] > # additional_devices = [
	I1212 00:50:43.483821 1179266 command_runner.go:130] > # ]
	I1212 00:50:43.483828 1179266 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 00:50:43.483836 1179266 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 00:50:43.483841 1179266 command_runner.go:130] > # 	"/etc/cdi",
	I1212 00:50:43.483847 1179266 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 00:50:43.483855 1179266 command_runner.go:130] > # ]
	I1212 00:50:43.483862 1179266 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 00:50:43.483871 1179266 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 00:50:43.483876 1179266 command_runner.go:130] > # Defaults to false.
	I1212 00:50:43.484046 1179266 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 00:50:43.484062 1179266 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 00:50:43.484070 1179266 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 00:50:43.484078 1179266 command_runner.go:130] > # hooks_dir = [
	I1212 00:50:43.484083 1179266 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 00:50:43.484088 1179266 command_runner.go:130] > # ]
	I1212 00:50:43.484095 1179266 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 00:50:43.484105 1179266 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 00:50:43.484112 1179266 command_runner.go:130] > # its default mounts from the following two files:
	I1212 00:50:43.484123 1179266 command_runner.go:130] > #
	I1212 00:50:43.484131 1179266 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 00:50:43.484139 1179266 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 00:50:43.484149 1179266 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 00:50:43.484153 1179266 command_runner.go:130] > #
	I1212 00:50:43.484161 1179266 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 00:50:43.484172 1179266 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 00:50:43.484180 1179266 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 00:50:43.484192 1179266 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 00:50:43.484200 1179266 command_runner.go:130] > #
	I1212 00:50:43.484206 1179266 command_runner.go:130] > # default_mounts_file = ""
	I1212 00:50:43.484212 1179266 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 00:50:43.484221 1179266 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 00:50:43.484229 1179266 command_runner.go:130] > # pids_limit = 0
	I1212 00:50:43.484237 1179266 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 00:50:43.484248 1179266 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 00:50:43.484258 1179266 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 00:50:43.484267 1179266 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 00:50:43.484273 1179266 command_runner.go:130] > # log_size_max = -1
	I1212 00:50:43.484281 1179266 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 00:50:43.484448 1179266 command_runner.go:130] > # log_to_journald = false
	I1212 00:50:43.484465 1179266 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 00:50:43.484482 1179266 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 00:50:43.484489 1179266 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 00:50:43.484496 1179266 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 00:50:43.484502 1179266 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 00:50:43.484507 1179266 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 00:50:43.484520 1179266 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 00:50:43.484526 1179266 command_runner.go:130] > # read_only = false
	I1212 00:50:43.484534 1179266 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 00:50:43.484544 1179266 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 00:50:43.484553 1179266 command_runner.go:130] > # live configuration reload.
	I1212 00:50:43.484558 1179266 command_runner.go:130] > # log_level = "info"
	I1212 00:50:43.484565 1179266 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 00:50:43.484572 1179266 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:50:43.484577 1179266 command_runner.go:130] > # log_filter = ""
	I1212 00:50:43.484584 1179266 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 00:50:43.484592 1179266 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 00:50:43.484601 1179266 command_runner.go:130] > # separated by comma.
	I1212 00:50:43.484606 1179266 command_runner.go:130] > # uid_mappings = ""
	I1212 00:50:43.484614 1179266 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 00:50:43.484626 1179266 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 00:50:43.484631 1179266 command_runner.go:130] > # separated by comma.
	I1212 00:50:43.484860 1179266 command_runner.go:130] > # gid_mappings = ""
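As an illustration of the containerUID:HostUID:Size syntax described above, a user-namespace mapping entry in crio.conf would look roughly like this; the 0:100000:65536 range is a hypothetical mapping, not one configured in this run:

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"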
	I1212 00:50:43.484882 1179266 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 00:50:43.484891 1179266 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:50:43.484898 1179266 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:50:43.484903 1179266 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 00:50:43.484916 1179266 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 00:50:43.484923 1179266 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:50:43.484931 1179266 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:50:43.484939 1179266 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 00:50:43.484946 1179266 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 00:50:43.484954 1179266 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 00:50:43.484961 1179266 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 00:50:43.484971 1179266 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 00:50:43.484979 1179266 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 00:50:43.484990 1179266 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 00:50:43.484996 1179266 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 00:50:43.485002 1179266 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 00:50:43.485008 1179266 command_runner.go:130] > # drop_infra_ctr = true
	I1212 00:50:43.485016 1179266 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 00:50:43.485038 1179266 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 00:50:43.485047 1179266 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 00:50:43.485056 1179266 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 00:50:43.485064 1179266 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 00:50:43.485070 1179266 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 00:50:43.485373 1179266 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 00:50:43.485395 1179266 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 00:50:43.485404 1179266 command_runner.go:130] > # pinns_path = ""
	I1212 00:50:43.485411 1179266 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 00:50:43.485419 1179266 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 00:50:43.485427 1179266 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 00:50:43.485445 1179266 command_runner.go:130] > # default_runtime = "runc"
	I1212 00:50:43.485455 1179266 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 00:50:43.485470 1179266 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1212 00:50:43.485487 1179266 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 00:50:43.485494 1179266 command_runner.go:130] > # creation as a file is not desired either.
	I1212 00:50:43.485504 1179266 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 00:50:43.485514 1179266 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 00:50:43.485520 1179266 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 00:50:43.485524 1179266 command_runner.go:130] > # ]
	I1212 00:50:43.485533 1179266 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 00:50:43.485541 1179266 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 00:50:43.485549 1179266 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 00:50:43.485557 1179266 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 00:50:43.485565 1179266 command_runner.go:130] > #
	I1212 00:50:43.485598 1179266 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 00:50:43.485612 1179266 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 00:50:43.485617 1179266 command_runner.go:130] > #  runtime_type = "oci"
	I1212 00:50:43.485623 1179266 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 00:50:43.485631 1179266 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 00:50:43.485642 1179266 command_runner.go:130] > #  allowed_annotations = []
	I1212 00:50:43.485647 1179266 command_runner.go:130] > # Where:
	I1212 00:50:43.485653 1179266 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 00:50:43.485662 1179266 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 00:50:43.485673 1179266 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 00:50:43.485681 1179266 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 00:50:43.485688 1179266 command_runner.go:130] > #   in $PATH.
	I1212 00:50:43.485695 1179266 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 00:50:43.485701 1179266 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 00:50:43.485717 1179266 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 00:50:43.485722 1179266 command_runner.go:130] > #   state.
	I1212 00:50:43.485734 1179266 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 00:50:43.485741 1179266 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1212 00:50:43.485749 1179266 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 00:50:43.485756 1179266 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 00:50:43.485776 1179266 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 00:50:43.485785 1179266 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 00:50:43.485791 1179266 command_runner.go:130] > #   The currently recognized values are:
	I1212 00:50:43.485799 1179266 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 00:50:43.485808 1179266 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 00:50:43.485819 1179266 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 00:50:43.485826 1179266 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 00:50:43.485842 1179266 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 00:50:43.485850 1179266 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 00:50:43.485858 1179266 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 00:50:43.485866 1179266 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 00:50:43.485873 1179266 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 00:50:43.485879 1179266 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 00:50:43.485885 1179266 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1212 00:50:43.485890 1179266 command_runner.go:130] > runtime_type = "oci"
	I1212 00:50:43.485901 1179266 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 00:50:43.485906 1179266 command_runner.go:130] > runtime_config_path = ""
	I1212 00:50:43.485911 1179266 command_runner.go:130] > monitor_path = ""
	I1212 00:50:43.485916 1179266 command_runner.go:130] > monitor_cgroup = ""
	I1212 00:50:43.485922 1179266 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 00:50:43.485966 1179266 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 00:50:43.485977 1179266 command_runner.go:130] > # running containers
	I1212 00:50:43.485984 1179266 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 00:50:43.485992 1179266 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 00:50:43.486000 1179266 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 00:50:43.486013 1179266 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 00:50:43.486020 1179266 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 00:50:43.486025 1179266 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 00:50:43.486035 1179266 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 00:50:43.486040 1179266 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 00:50:43.486046 1179266 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 00:50:43.486059 1179266 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
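A minimal sketch of an extra runtime handler entry, following the [crio.runtime.runtimes.runtime-handler] format documented above; the crun binary path and root directory are assumptions for illustration, and only the runc handler above is actually configured in this run:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"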
	I1212 00:50:43.486134 1179266 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 00:50:43.486147 1179266 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 00:50:43.486155 1179266 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 00:50:43.486165 1179266 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 00:50:43.486174 1179266 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 00:50:43.486182 1179266 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 00:50:43.486212 1179266 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 00:50:43.486229 1179266 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 00:50:43.486236 1179266 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 00:50:43.486245 1179266 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 00:50:43.486249 1179266 command_runner.go:130] > # Example:
	I1212 00:50:43.486256 1179266 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 00:50:43.486262 1179266 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 00:50:43.486269 1179266 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 00:50:43.486276 1179266 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 00:50:43.486281 1179266 command_runner.go:130] > # cpuset = 0
	I1212 00:50:43.486286 1179266 command_runner.go:130] > # cpushares = "0-1"
	I1212 00:50:43.486299 1179266 command_runner.go:130] > # Where:
	I1212 00:50:43.486305 1179266 command_runner.go:130] > # The workload name is workload-type.
	I1212 00:50:43.486314 1179266 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 00:50:43.486321 1179266 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 00:50:43.486328 1179266 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 00:50:43.486338 1179266 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 00:50:43.486349 1179266 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 00:50:43.486360 1179266 command_runner.go:130] > # 
	I1212 00:50:43.486374 1179266 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 00:50:43.486379 1179266 command_runner.go:130] > #
	I1212 00:50:43.486393 1179266 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 00:50:43.486401 1179266 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 00:50:43.486409 1179266 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 00:50:43.486417 1179266 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 00:50:43.486424 1179266 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 00:50:43.486428 1179266 command_runner.go:130] > [crio.image]
	I1212 00:50:43.486435 1179266 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 00:50:43.486676 1179266 command_runner.go:130] > # default_transport = "docker://"
	I1212 00:50:43.486692 1179266 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 00:50:43.486700 1179266 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:50:43.486706 1179266 command_runner.go:130] > # global_auth_file = ""
	I1212 00:50:43.486712 1179266 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 00:50:43.486719 1179266 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:50:43.486726 1179266 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 00:50:43.486735 1179266 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 00:50:43.486756 1179266 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:50:43.486763 1179266 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:50:43.486768 1179266 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 00:50:43.486775 1179266 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 00:50:43.486783 1179266 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 00:50:43.486790 1179266 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 00:50:43.486801 1179266 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 00:50:43.486807 1179266 command_runner.go:130] > # pause_command = "/pause"
	I1212 00:50:43.486814 1179266 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 00:50:43.486826 1179266 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 00:50:43.486837 1179266 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 00:50:43.486845 1179266 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 00:50:43.486852 1179266 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 00:50:43.486858 1179266 command_runner.go:130] > # signature_policy = ""
	I1212 00:50:43.486866 1179266 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 00:50:43.486874 1179266 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 00:50:43.486879 1179266 command_runner.go:130] > # changing them here.
	I1212 00:50:43.486892 1179266 command_runner.go:130] > # insecure_registries = [
	I1212 00:50:43.486897 1179266 command_runner.go:130] > # ]
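If TLS verification ever had to be skipped for a private registry, the commented list above would be filled in roughly as follows; the registry host is hypothetical and no insecure registries are configured in this run:

	insecure_registries = [
		"registry.internal.example:5000",
	]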
	I1212 00:50:43.486905 1179266 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 00:50:43.486915 1179266 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 00:50:43.487074 1179266 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 00:50:43.487103 1179266 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 00:50:43.487109 1179266 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 00:50:43.487117 1179266 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 00:50:43.487122 1179266 command_runner.go:130] > # CNI plugins.
	I1212 00:50:43.487133 1179266 command_runner.go:130] > [crio.network]
	I1212 00:50:43.487141 1179266 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 00:50:43.487148 1179266 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 00:50:43.487169 1179266 command_runner.go:130] > # cni_default_network = ""
	I1212 00:50:43.487177 1179266 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 00:50:43.487183 1179266 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 00:50:43.487190 1179266 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 00:50:43.487198 1179266 command_runner.go:130] > # plugin_dirs = [
	I1212 00:50:43.487203 1179266 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 00:50:43.487208 1179266 command_runner.go:130] > # ]
	I1212 00:50:43.487215 1179266 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 00:50:43.487221 1179266 command_runner.go:130] > [crio.metrics]
	I1212 00:50:43.487227 1179266 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 00:50:43.487232 1179266 command_runner.go:130] > # enable_metrics = false
	I1212 00:50:43.487245 1179266 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 00:50:43.487250 1179266 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 00:50:43.487258 1179266 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 00:50:43.487265 1179266 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 00:50:43.487273 1179266 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 00:50:43.487279 1179266 command_runner.go:130] > # metrics_collectors = [
	I1212 00:50:43.487283 1179266 command_runner.go:130] > # 	"operations",
	I1212 00:50:43.487295 1179266 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 00:50:43.487302 1179266 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 00:50:43.487310 1179266 command_runner.go:130] > # 	"operations_errors",
	I1212 00:50:43.487316 1179266 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 00:50:43.487323 1179266 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 00:50:43.487328 1179266 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 00:50:43.487336 1179266 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 00:50:43.487341 1179266 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 00:50:43.487347 1179266 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 00:50:43.487352 1179266 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 00:50:43.487357 1179266 command_runner.go:130] > # 	"containers_oom_total",
	I1212 00:50:43.487609 1179266 command_runner.go:130] > # 	"containers_oom",
	I1212 00:50:43.487626 1179266 command_runner.go:130] > # 	"processes_defunct",
	I1212 00:50:43.487633 1179266 command_runner.go:130] > # 	"operations_total",
	I1212 00:50:43.487639 1179266 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 00:50:43.487644 1179266 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 00:50:43.487653 1179266 command_runner.go:130] > # 	"operations_errors_total",
	I1212 00:50:43.487658 1179266 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 00:50:43.487664 1179266 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 00:50:43.487682 1179266 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 00:50:43.487693 1179266 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 00:50:43.487701 1179266 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 00:50:43.487707 1179266 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 00:50:43.487711 1179266 command_runner.go:130] > # ]
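Enabling the metrics endpoint with a subset of the collectors listed above would look roughly like this; the values are illustrative and metrics remain disabled in this run:

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
	]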
	I1212 00:50:43.487718 1179266 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 00:50:43.487723 1179266 command_runner.go:130] > # metrics_port = 9090
	I1212 00:50:43.487732 1179266 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 00:50:43.487738 1179266 command_runner.go:130] > # metrics_socket = ""
	I1212 00:50:43.487744 1179266 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 00:50:43.487752 1179266 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 00:50:43.487763 1179266 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 00:50:43.487770 1179266 command_runner.go:130] > # certificate on any modification event.
	I1212 00:50:43.487775 1179266 command_runner.go:130] > # metrics_cert = ""
	I1212 00:50:43.487782 1179266 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 00:50:43.487790 1179266 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 00:50:43.487796 1179266 command_runner.go:130] > # metrics_key = ""
	I1212 00:50:43.487803 1179266 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 00:50:43.487810 1179266 command_runner.go:130] > [crio.tracing]
	I1212 00:50:43.487818 1179266 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 00:50:43.488037 1179266 command_runner.go:130] > # enable_tracing = false
	I1212 00:50:43.488053 1179266 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 00:50:43.488060 1179266 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 00:50:43.488066 1179266 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 00:50:43.488072 1179266 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 00:50:43.488080 1179266 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 00:50:43.488089 1179266 command_runner.go:130] > [crio.stats]
	I1212 00:50:43.488100 1179266 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 00:50:43.488109 1179266 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 00:50:43.488117 1179266 command_runner.go:130] > # stats_collection_period = 0
	I1212 00:50:43.489989 1179266 command_runner.go:130] ! time="2023-12-12 00:50:43.478203575Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1212 00:50:43.490011 1179266 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
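Collapsing the dump above to just the uncommented settings, the effective crio.conf overrides in place on this node reduce to the following summary (taken directly from the values logged above, not a separate file written by minikube):

	[crio.runtime]
	conmon_cgroup = "pod"
	cgroup_manager = "cgroupfs"

	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"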
	I1212 00:50:43.490227 1179266 cni.go:84] Creating CNI manager for ""
	I1212 00:50:43.490244 1179266 cni.go:136] 2 nodes found, recommending kindnet
	I1212 00:50:43.490258 1179266 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 00:50:43.490287 1179266 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-270339 NodeName:multinode-270339-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:50:43.490441 1179266 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-270339-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:50:43.490509 1179266 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-270339-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-270339 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 00:50:43.490584 1179266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 00:50:43.505067 1179266 command_runner.go:130] > kubeadm
	I1212 00:50:43.505094 1179266 command_runner.go:130] > kubectl
	I1212 00:50:43.505100 1179266 command_runner.go:130] > kubelet
	I1212 00:50:43.506495 1179266 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:50:43.506594 1179266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1212 00:50:43.517584 1179266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1212 00:50:43.541028 1179266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:50:43.562200 1179266 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:50:43.566631 1179266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:50:43.579995 1179266 host.go:66] Checking if "multinode-270339" exists ...
	I1212 00:50:43.580273 1179266 start.go:304] JoinCluster: &{Name:multinode-270339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-270339 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:50:43.580364 1179266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 00:50:43.580418 1179266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339
	I1212 00:50:43.580784 1179266 config.go:182] Loaded profile config "multinode-270339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:50:43.598612 1179266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34085 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339/id_rsa Username:docker}
	I1212 00:50:43.774104 1179266 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ts55zl.7vdmicgdyy3qtwmg --discovery-token-ca-cert-hash sha256:423d166c085e277a11bea519bc38c8d176eb97d5c6d6f0fd8c403765ff119d59 
	I1212 00:50:43.774148 1179266 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 00:50:43.774192 1179266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ts55zl.7vdmicgdyy3qtwmg --discovery-token-ca-cert-hash sha256:423d166c085e277a11bea519bc38c8d176eb97d5c6d6f0fd8c403765ff119d59 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-270339-m02"
	I1212 00:50:43.821624 1179266 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 00:50:43.868666 1179266 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:50:43.868687 1179266 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I1212 00:50:43.868694 1179266 command_runner.go:130] > OS: Linux
	I1212 00:50:43.868701 1179266 command_runner.go:130] > CGROUPS_CPU: enabled
	I1212 00:50:43.868708 1179266 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1212 00:50:43.868714 1179266 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1212 00:50:43.868720 1179266 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1212 00:50:43.868726 1179266 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1212 00:50:43.868732 1179266 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1212 00:50:43.868741 1179266 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1212 00:50:43.868747 1179266 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1212 00:50:43.868754 1179266 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1212 00:50:43.980649 1179266 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1212 00:50:43.980674 1179266 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1212 00:50:44.022202 1179266 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:50:44.022526 1179266 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:50:44.022547 1179266 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 00:50:44.126229 1179266 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1212 00:50:46.649358 1179266 command_runner.go:130] > This node has joined the cluster:
	I1212 00:50:46.649381 1179266 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1212 00:50:46.649389 1179266 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1212 00:50:46.649397 1179266 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1212 00:50:46.652440 1179266 command_runner.go:130] ! W1212 00:50:43.821104    1022 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1212 00:50:46.652468 1179266 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1212 00:50:46.652483 1179266 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:50:46.652500 1179266 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ts55zl.7vdmicgdyy3qtwmg --discovery-token-ca-cert-hash sha256:423d166c085e277a11bea519bc38c8d176eb97d5c6d6f0fd8c403765ff119d59 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-270339-m02": (2.878295321s)
	I1212 00:50:46.652521 1179266 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 00:50:46.867316 1179266 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1212 00:50:46.867417 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4 minikube.k8s.io/name=multinode-270339 minikube.k8s.io/updated_at=2023_12_12T00_50_46_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:50:47.004035 1179266 command_runner.go:130] > node/multinode-270339-m02 labeled
	I1212 00:50:47.004076 1179266 start.go:306] JoinCluster complete in 3.423802285s
	I1212 00:50:47.004089 1179266 cni.go:84] Creating CNI manager for ""
	I1212 00:50:47.004095 1179266 cni.go:136] 2 nodes found, recommending kindnet
	I1212 00:50:47.004156 1179266 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:50:47.009376 1179266 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 00:50:47.009402 1179266 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1212 00:50:47.009410 1179266 command_runner.go:130] > Device: 3ah/58d	Inode: 1572675     Links: 1
	I1212 00:50:47.009421 1179266 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:50:47.009428 1179266 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1212 00:50:47.009435 1179266 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1212 00:50:47.009441 1179266 command_runner.go:130] > Change: 2023-12-12 00:11:51.729537575 +0000
	I1212 00:50:47.009451 1179266 command_runner.go:130] >  Birth: 2023-12-12 00:11:51.689538767 +0000
	I1212 00:50:47.009856 1179266 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 00:50:47.009875 1179266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 00:50:47.033924 1179266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:50:47.310749 1179266 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 00:50:47.317192 1179266 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 00:50:47.320379 1179266 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 00:50:47.340666 1179266 command_runner.go:130] > daemonset.apps/kindnet configured
	I1212 00:50:47.347088 1179266 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:50:47.347339 1179266 kapi.go:59] client config for multinode-270339: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/client.key", CAFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7710), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:50:47.347656 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 00:50:47.347666 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:47.347675 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:47.347681 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:47.351612 1179266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:50:47.351633 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:47.351642 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:47.351648 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:47.351671 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:47.351685 1179266 round_trippers.go:580]     Content-Length: 291
	I1212 00:50:47.351692 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:47 GMT
	I1212 00:50:47.351699 1179266 round_trippers.go:580]     Audit-Id: bdf32782-618d-4e18-ba22-f7159b6b882a
	I1212 00:50:47.351709 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:47.351893 1179266 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"069b2802-3295-4313-81b9-da639a5d7429","resourceVersion":"410","creationTimestamp":"2023-12-12T00:50:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 00:50:47.351985 1179266 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-270339" context rescaled to 1 replicas
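The scale request above reads the coredns Deployment's autoscaling/v1 Scale subresource before minikube pins it to a single replica. A minimal client-go sketch of that same read-then-update (the kubeconfig path is an assumed placeholder, not the path used by this test):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed placeholder path; the log above uses the minikube profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	// GET .../namespaces/kube-system/deployments/coredns/scale, as in the request above.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("coredns currently at %d replica(s)\n", scale.Spec.Replicas)

	// Pin the deployment to one replica, mirroring the "rescaled to 1 replicas" step.
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
}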
	I1212 00:50:47.352008 1179266 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 00:50:47.354144 1179266 out.go:177] * Verifying Kubernetes components...
	I1212 00:50:47.356523 1179266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:50:47.373272 1179266 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:50:47.373544 1179266 kapi.go:59] client config for multinode-270339: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/multinode-270339/client.key", CAFile:"/home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7710), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:50:47.373808 1179266 node_ready.go:35] waiting up to 6m0s for node "multinode-270339-m02" to be "Ready" ...
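The repeated GET requests that follow implement that wait: the node object is fetched roughly twice a second and its Ready condition inspected until it turns True or the 6m0s budget runs out. A minimal client-go sketch of the same idea (node name and timeout taken from the log; the kubeconfig path is an assumed placeholder):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node until its Ready condition is True or the timeout elapses.
func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // the log above polls at roughly this interval
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	// Assumed placeholder path; the test uses the profile kubeconfig shown in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(context.Background(), cs, "multinode-270339-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node multinode-270339-m02 is Ready")
}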
	I1212 00:50:47.373868 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:47.373887 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:47.373896 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:47.373903 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:47.376256 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:47.376279 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:47.376288 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:47 GMT
	I1212 00:50:47.376294 1179266 round_trippers.go:580]     Audit-Id: 62cec0e2-e47f-408f-912d-b94411ed7c33
	I1212 00:50:47.376301 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:47.376307 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:47.376313 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:47.376319 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:47.376442 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"451","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1212 00:50:47.376851 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:47.376866 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:47.376875 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:47.376886 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:47.378944 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:47.378964 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:47.378972 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:47.378979 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:47 GMT
	I1212 00:50:47.378985 1179266 round_trippers.go:580]     Audit-Id: 87313474-8567-47e1-8336-479b36efe05d
	I1212 00:50:47.378992 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:47.378999 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:47.379005 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:47.379115 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"451","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1212 00:50:47.880195 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:47.880222 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:47.880233 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:47.880240 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:47.882663 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:47.882689 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:47.882697 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:47.882704 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:47.882710 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:47.882717 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:47.882723 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:47 GMT
	I1212 00:50:47.882733 1179266 round_trippers.go:580]     Audit-Id: 491a72ab-50a0-45e6-b092-08879d572e9a
	I1212 00:50:47.882878 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"451","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1212 00:50:48.379822 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:48.379845 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:48.379855 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:48.379862 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:48.382299 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:48.382325 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:48.382334 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:48.382340 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:48.382347 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:48.382354 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:48 GMT
	I1212 00:50:48.382362 1179266 round_trippers.go:580]     Audit-Id: 04dd93b5-c977-4aae-ad9e-56bbefab49b2
	I1212 00:50:48.382370 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:48.382468 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"451","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1212 00:50:48.880138 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:48.880176 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:48.880187 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:48.880196 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:48.882520 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:48.882542 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:48.882551 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:48.882557 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:48.882563 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:48.882571 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:48.882578 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:48 GMT
	I1212 00:50:48.882584 1179266 round_trippers.go:580]     Audit-Id: f60d74e0-283f-4ad3-946b-0e9b069dc6f6
	I1212 00:50:48.882731 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"451","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1212 00:50:49.379701 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:49.379725 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:49.379735 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:49.379746 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:49.382258 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:49.382286 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:49.382295 1179266 round_trippers.go:580]     Audit-Id: def89a83-fbeb-4985-9566-98e591b1ee68
	I1212 00:50:49.382304 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:49.382311 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:49.382317 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:49.382324 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:49.382333 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:49 GMT
	I1212 00:50:49.382569 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"451","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1212 00:50:49.382952 1179266 node_ready.go:58] node "multinode-270339-m02" has status "Ready":"False"
	I1212 00:50:49.880648 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:49.880671 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:49.880683 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:49.880690 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:49.883183 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:49.883204 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:49.883212 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:49.883218 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:49.883225 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:49.883231 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:49.883237 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:49 GMT
	I1212 00:50:49.883244 1179266 round_trippers.go:580]     Audit-Id: 030b78e5-a67f-48bb-aebc-2a6ad0748527
	I1212 00:50:49.883385 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"468","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1212 00:50:50.380121 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:50.380146 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:50.380156 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:50.380163 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:50.382479 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:50.382500 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:50.382515 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:50.382522 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:50.382528 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:50.382534 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:50.382541 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:50 GMT
	I1212 00:50:50.382548 1179266 round_trippers.go:580]     Audit-Id: 9857725a-93cb-4064-8fae-ac4a1b9ca060
	I1212 00:50:50.382653 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"468","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1212 00:50:50.880589 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:50.880611 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:50.880622 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:50.880629 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:50.883261 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:50.883286 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:50.883294 1179266 round_trippers.go:580]     Audit-Id: b079ebfc-819d-42f2-a232-a65320a772d7
	I1212 00:50:50.883301 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:50.883307 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:50.883314 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:50.883321 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:50.883328 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:50 GMT
	I1212 00:50:50.883465 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"468","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1212 00:50:51.380177 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:51.380199 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:51.380209 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:51.380217 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:51.382703 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:51.382736 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:51.382745 1179266 round_trippers.go:580]     Audit-Id: cf595fb1-f753-4a0a-86b7-155341871683
	I1212 00:50:51.382752 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:51.382758 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:51.382764 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:51.382771 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:51.382783 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:51 GMT
	I1212 00:50:51.382949 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"468","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1212 00:50:51.383331 1179266 node_ready.go:58] node "multinode-270339-m02" has status "Ready":"False"
	I1212 00:50:51.879608 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:51.879631 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:51.879640 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:51.879647 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:51.882150 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:51.882169 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:51.882178 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:51.882184 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:51.882191 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:51.882197 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:51 GMT
	I1212 00:50:51.882204 1179266 round_trippers.go:580]     Audit-Id: b9589344-f208-4e18-a92b-09d88929fcee
	I1212 00:50:51.882210 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:51.882361 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"468","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1212 00:50:52.380642 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:52.380665 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:52.380675 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:52.380683 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:52.383081 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:52.383104 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:52.383112 1179266 round_trippers.go:580]     Audit-Id: d97493c5-0d93-4ac1-b0d7-84edf37082dc
	I1212 00:50:52.383118 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:52.383125 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:52.383131 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:52.383138 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:52.383145 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:52 GMT
	I1212 00:50:52.383254 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"468","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1212 00:50:52.880058 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:52.880099 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:52.880109 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:52.880117 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:52.882718 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:52.882737 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:52.882746 1179266 round_trippers.go:580]     Audit-Id: 62851ac9-874d-4ce7-9640-83b6c3bc2037
	I1212 00:50:52.882753 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:52.882759 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:52.882765 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:52.882771 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:52.882777 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:52 GMT
	I1212 00:50:52.882913 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"468","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1212 00:50:53.380058 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:53.380089 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:53.380098 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:53.380106 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:53.382922 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:53.382950 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:53.382959 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:53.382966 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:53 GMT
	I1212 00:50:53.382973 1179266 round_trippers.go:580]     Audit-Id: 2dea34e8-11c4-4530-a43a-dbcbffd0df72
	I1212 00:50:53.382979 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:53.382985 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:53.382991 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:53.383114 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"468","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1212 00:50:53.383502 1179266 node_ready.go:58] node "multinode-270339-m02" has status "Ready":"False"
	I1212 00:50:53.880343 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:53.880364 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:53.880375 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:53.880382 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:53.882889 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:53.882913 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:53.882921 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:53.882928 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:53.882934 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:53.882941 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:53.882947 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:53 GMT
	I1212 00:50:53.882953 1179266 round_trippers.go:580]     Audit-Id: 5f35e452-b4a2-4648-808d-45eab9e7e291
	I1212 00:50:53.883139 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"468","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1212 00:50:54.380241 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:54.380268 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:54.380278 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:54.380285 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:54.382778 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:54.382808 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:54.382817 1179266 round_trippers.go:580]     Audit-Id: 4e98b1b6-a6bb-4fdf-8b77-8e66b8164704
	I1212 00:50:54.382823 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:54.382829 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:54.382835 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:54.382842 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:54.382852 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:54 GMT
	I1212 00:50:54.382968 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"468","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1212 00:50:54.879875 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:54.879900 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:54.879911 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:54.879924 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:54.882358 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:54.882383 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:54.882392 1179266 round_trippers.go:580]     Audit-Id: f270d870-3ffc-4199-9d17-6944bbd53c86
	I1212 00:50:54.882399 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:54.882404 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:54.882411 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:54.882418 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:54.882424 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:54 GMT
	I1212 00:50:54.882569 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"468","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1212 00:50:55.379635 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:55.379661 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:55.379671 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:55.379678 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:55.382167 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:55.382190 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:55.382199 1179266 round_trippers.go:580]     Audit-Id: f89cb85c-0e68-4fdf-aa95-f759cdb799e3
	I1212 00:50:55.382206 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:55.382212 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:55.382218 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:55.382273 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:55.382280 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:55 GMT
	I1212 00:50:55.382384 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"468","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1212 00:50:55.880506 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:55.880528 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:55.880538 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:55.880545 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:55.882928 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:55.882957 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:55.882966 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:55.882973 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:55 GMT
	I1212 00:50:55.882980 1179266 round_trippers.go:580]     Audit-Id: 4b4289b3-6ddd-4ef7-9c5e-c091646445f0
	I1212 00:50:55.882987 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:55.882996 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:55.883008 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:55.883307 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"468","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1212 00:50:55.883708 1179266 node_ready.go:58] node "multinode-270339-m02" has status "Ready":"False"
	I1212 00:50:56.380488 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:56.380509 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:56.380519 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:56.380527 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:56.383039 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:56.383064 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:56.383074 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:56.383081 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:56.383088 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:56.383098 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:56 GMT
	I1212 00:50:56.383105 1179266 round_trippers.go:580]     Audit-Id: a41435c2-9952-4941-9de6-77fc3ed979ae
	I1212 00:50:56.383115 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:56.383265 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"468","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1212 00:50:56.880427 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:56.880454 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:56.880465 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:56.880472 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:56.882805 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:56.882827 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:56.882835 1179266 round_trippers.go:580]     Audit-Id: 0b3f063c-b223-49d2-9fc3-89be32130cca
	I1212 00:50:56.882842 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:56.882848 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:56.882855 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:56.882862 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:56.882868 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:56 GMT
	I1212 00:50:56.883069 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"475","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1212 00:50:57.379768 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:57.379794 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:57.379804 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:57.379811 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:57.382160 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:57.382178 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:57.382187 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:57.382193 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:57.382199 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:57 GMT
	I1212 00:50:57.382206 1179266 round_trippers.go:580]     Audit-Id: 28524bf6-957f-4976-aa88-c05ea6959847
	I1212 00:50:57.382212 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:57.382218 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:57.382357 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"475","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1212 00:50:57.880034 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:57.880057 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:57.880068 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:57.880075 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:57.882700 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:57.882726 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:57.882735 1179266 round_trippers.go:580]     Audit-Id: c02fbc0b-e41f-4bca-ae5c-be13a2d72c99
	I1212 00:50:57.882741 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:57.882748 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:57.882754 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:57.882761 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:57.882768 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:57 GMT
	I1212 00:50:57.882893 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"475","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1212 00:50:58.379711 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:58.379735 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:58.379746 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:58.379754 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:58.382285 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:58.382306 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:58.382315 1179266 round_trippers.go:580]     Audit-Id: 25cdcb67-8131-437d-9779-540846b7a3c1
	I1212 00:50:58.382321 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:58.382328 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:58.382334 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:58.382341 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:58.382351 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:58 GMT
	I1212 00:50:58.382468 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"475","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1212 00:50:58.382869 1179266 node_ready.go:58] node "multinode-270339-m02" has status "Ready":"False"
	I1212 00:50:58.880389 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:58.880418 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:58.880428 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:58.880435 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:58.883197 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:58.883216 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:58.883224 1179266 round_trippers.go:580]     Audit-Id: 955635f2-9106-4d4d-b66b-14efc1c73545
	I1212 00:50:58.883230 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:58.883236 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:58.883242 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:58.883248 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:58.883256 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:58 GMT
	I1212 00:50:58.883377 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"475","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1212 00:50:59.380442 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:59.380465 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:59.380476 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:59.380487 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:59.382998 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:59.383025 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:59.383033 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:59.383040 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:59.383047 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:59.383053 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:59 GMT
	I1212 00:50:59.383059 1179266 round_trippers.go:580]     Audit-Id: 7e0d44aa-0408-4321-91ae-abf356bfc4bb
	I1212 00:50:59.383066 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:59.383167 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"475","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1212 00:50:59.880286 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:50:59.880315 1179266 round_trippers.go:469] Request Headers:
	I1212 00:50:59.880326 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:50:59.880334 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:50:59.882810 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:50:59.882836 1179266 round_trippers.go:577] Response Headers:
	I1212 00:50:59.882845 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:50:59.882851 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:50:59.882860 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:50:59 GMT
	I1212 00:50:59.882867 1179266 round_trippers.go:580]     Audit-Id: 70784393-078a-47ee-ad77-e14946fe7903
	I1212 00:50:59.882873 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:50:59.882882 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:50:59.883040 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"475","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1212 00:51:00.379758 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:51:00.379780 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:00.379790 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:00.379798 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:00.382630 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:00.382653 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:00.382662 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:00.382670 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:00.382676 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:00.382682 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:00.382688 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:00 GMT
	I1212 00:51:00.382699 1179266 round_trippers.go:580]     Audit-Id: 95d9aec8-9e63-4a59-9a24-5a9bdfee52fc
	I1212 00:51:00.382824 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"475","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1212 00:51:00.383274 1179266 node_ready.go:58] node "multinode-270339-m02" has status "Ready":"False"
	I1212 00:51:00.879671 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:51:00.879695 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:00.879704 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:00.879713 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:00.882255 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:00.882277 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:00.882286 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:00.882302 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:00.882314 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:00.882325 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:00 GMT
	I1212 00:51:00.882331 1179266 round_trippers.go:580]     Audit-Id: 5dde50b6-c1cd-4c60-a7b2-a296ab4fe60a
	I1212 00:51:00.882343 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:00.882512 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"475","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1212 00:51:01.379624 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:51:01.379646 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:01.379656 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:01.379665 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:01.382202 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:01.382222 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:01.382230 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:01.382237 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:01.382244 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:01 GMT
	I1212 00:51:01.382250 1179266 round_trippers.go:580]     Audit-Id: c2a226eb-500a-4bbb-8d36-93c04b155bf6
	I1212 00:51:01.382257 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:01.382263 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:01.382419 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"491","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5930 chars]
	I1212 00:51:01.382817 1179266 node_ready.go:49] node "multinode-270339-m02" has status "Ready":"True"
	I1212 00:51:01.382835 1179266 node_ready.go:38] duration metric: took 14.00901582s waiting for node "multinode-270339-m02" to be "Ready" ...
	I1212 00:51:01.382846 1179266 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
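For reference, the polling logged above (repeated GETs of the node until its "Ready" condition is True, followed by the extra wait for system-critical pods selected by label) corresponds roughly to the following minimal client-go sketch. This is an illustrative reconstruction only, not minikube's actual code path; the kubeconfig path, poll interval, node name, and label selector are assumptions.

// readiness_sketch.go: illustrative only; not taken from this test run.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig file (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll the node until its Ready condition reports True, as the log
	// above does roughly every 500ms for multinode-270339-m02.
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-270339-m02", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready := false
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if ready {
			fmt.Println("node is Ready")
			break
		}
		time.Sleep(500 * time.Millisecond)
	}

	// Then list kube-system pods for one of the system-critical labels
	// and report whether each pod's Ready condition is True.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("%s Ready=%s\n", p.Name, c.Status)
			}
		}
	}
}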
	I1212 00:51:01.382911 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1212 00:51:01.382921 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:01.382929 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:01.382936 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:01.386422 1179266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:51:01.386445 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:01.386454 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:01.386462 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:01.386474 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:01 GMT
	I1212 00:51:01.386480 1179266 round_trippers.go:580]     Audit-Id: 01032387-46d1-4589-a022-471db70140f3
	I1212 00:51:01.386486 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:01.386495 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:01.386896 1179266 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"494"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7n4rj","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"16efc97c-281e-4ae4-89a2-7c7507db2e8f","resourceVersion":"406","creationTimestamp":"2023-12-12T00:50:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"80aaeecd-d2e6-429b-972f-733cb0b597ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80aaeecd-d2e6-429b-972f-733cb0b597ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1212 00:51:01.389802 1179266 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7n4rj" in "kube-system" namespace to be "Ready" ...
	I1212 00:51:01.389883 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7n4rj
	I1212 00:51:01.389893 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:01.389902 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:01.389909 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:01.392194 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:01.392216 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:01.392225 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:01 GMT
	I1212 00:51:01.392232 1179266 round_trippers.go:580]     Audit-Id: 43bc1331-0ce6-4aff-8169-a54624c76a6f
	I1212 00:51:01.392238 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:01.392244 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:01.392251 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:01.392260 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:01.392657 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7n4rj","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"16efc97c-281e-4ae4-89a2-7c7507db2e8f","resourceVersion":"406","creationTimestamp":"2023-12-12T00:50:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"80aaeecd-d2e6-429b-972f-733cb0b597ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80aaeecd-d2e6-429b-972f-733cb0b597ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1212 00:51:01.393150 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:51:01.393167 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:01.393176 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:01.393183 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:01.395397 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:01.395416 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:01.395423 1179266 round_trippers.go:580]     Audit-Id: c28d4e0a-8d7f-4674-a999-423ffa0dd003
	I1212 00:51:01.395430 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:01.395436 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:01.395442 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:01.395448 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:01.395454 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:01 GMT
	I1212 00:51:01.395580 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:51:01.395956 1179266 pod_ready.go:92] pod "coredns-5dd5756b68-7n4rj" in "kube-system" namespace has status "Ready":"True"
	I1212 00:51:01.395968 1179266 pod_ready.go:81] duration metric: took 6.14081ms waiting for pod "coredns-5dd5756b68-7n4rj" in "kube-system" namespace to be "Ready" ...
	I1212 00:51:01.395979 1179266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-270339" in "kube-system" namespace to be "Ready" ...
	I1212 00:51:01.396032 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-270339
	I1212 00:51:01.396037 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:01.396044 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:01.396051 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:01.398305 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:01.398324 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:01.398334 1179266 round_trippers.go:580]     Audit-Id: 13761ad9-4553-46bd-bfce-be2e71c0a3f2
	I1212 00:51:01.398340 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:01.398346 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:01.398352 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:01.398358 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:01.398364 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:01 GMT
	I1212 00:51:01.398521 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-270339","namespace":"kube-system","uid":"67c26bc8-7478-40c7-b698-9d505f2d9108","resourceVersion":"416","creationTimestamp":"2023-12-12T00:50:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"fa312a672de2e7002f7da391295c4da1","kubernetes.io/config.mirror":"fa312a672de2e7002f7da391295c4da1","kubernetes.io/config.seen":"2023-12-12T00:50:11.291444657Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1212 00:51:01.398953 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:51:01.398962 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:01.398970 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:01.398977 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:01.401062 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:01.401139 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:01.401162 1179266 round_trippers.go:580]     Audit-Id: 8e8a91fc-eb56-4ee3-b89a-1850be20efe5
	I1212 00:51:01.401197 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:01.401220 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:01.401240 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:01.401295 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:01.401309 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:01 GMT
	I1212 00:51:01.401427 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:51:01.401798 1179266 pod_ready.go:92] pod "etcd-multinode-270339" in "kube-system" namespace has status "Ready":"True"
	I1212 00:51:01.401816 1179266 pod_ready.go:81] duration metric: took 5.830084ms waiting for pod "etcd-multinode-270339" in "kube-system" namespace to be "Ready" ...
	I1212 00:51:01.401833 1179266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-270339" in "kube-system" namespace to be "Ready" ...
	I1212 00:51:01.401913 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-270339
	I1212 00:51:01.401925 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:01.401933 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:01.401940 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:01.404108 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:01.404125 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:01.404133 1179266 round_trippers.go:580]     Audit-Id: 1f8cc115-43c2-4062-ae91-7ad30f6fd79d
	I1212 00:51:01.404139 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:01.404145 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:01.404157 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:01.404164 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:01.404170 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:01 GMT
	I1212 00:51:01.404331 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-270339","namespace":"kube-system","uid":"69200dae-a0c0-4c7a-8615-5b242ea522fa","resourceVersion":"413","creationTimestamp":"2023-12-12T00:50:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"85f57a6e9d4169ea7d73b4da63f983eb","kubernetes.io/config.mirror":"85f57a6e9d4169ea7d73b4da63f983eb","kubernetes.io/config.seen":"2023-12-12T00:50:02.864494058Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1212 00:51:01.404828 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:51:01.404836 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:01.404844 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:01.404851 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:01.406886 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:01.406946 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:01.406965 1179266 round_trippers.go:580]     Audit-Id: 348d864b-5e62-425f-9c1f-4e22eb011ad3
	I1212 00:51:01.406973 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:01.406979 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:01.406986 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:01.406992 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:01.407002 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:01 GMT
	I1212 00:51:01.407261 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:51:01.407648 1179266 pod_ready.go:92] pod "kube-apiserver-multinode-270339" in "kube-system" namespace has status "Ready":"True"
	I1212 00:51:01.407665 1179266 pod_ready.go:81] duration metric: took 5.825743ms waiting for pod "kube-apiserver-multinode-270339" in "kube-system" namespace to be "Ready" ...
	I1212 00:51:01.407676 1179266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-270339" in "kube-system" namespace to be "Ready" ...
	I1212 00:51:01.407733 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-270339
	I1212 00:51:01.407743 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:01.407751 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:01.407758 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:01.409869 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:01.409890 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:01.409897 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:01 GMT
	I1212 00:51:01.409903 1179266 round_trippers.go:580]     Audit-Id: 56ccfead-3b24-4535-a10c-5e9fa74daeab
	I1212 00:51:01.409910 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:01.409916 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:01.409925 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:01.409932 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:01.410066 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-270339","namespace":"kube-system","uid":"71e4b30c-7581-45a1-a740-05dd4689cd8d","resourceVersion":"414","creationTimestamp":"2023-12-12T00:50:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"69899ce532b3e555ace218f70ebfd6a0","kubernetes.io/config.mirror":"69899ce532b3e555ace218f70ebfd6a0","kubernetes.io/config.seen":"2023-12-12T00:50:11.291446897Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1212 00:51:01.410543 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:51:01.410557 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:01.410565 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:01.410572 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:01.412572 1179266 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:51:01.412592 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:01.412611 1179266 round_trippers.go:580]     Audit-Id: 132e31c6-18cb-405d-bf2f-543b8728b314
	I1212 00:51:01.412618 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:01.412624 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:01.412630 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:01.412637 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:01.412646 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:01 GMT
	I1212 00:51:01.412748 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:51:01.413112 1179266 pod_ready.go:92] pod "kube-controller-manager-multinode-270339" in "kube-system" namespace has status "Ready":"True"
	I1212 00:51:01.413128 1179266 pod_ready.go:81] duration metric: took 5.441755ms waiting for pod "kube-controller-manager-multinode-270339" in "kube-system" namespace to be "Ready" ...
	I1212 00:51:01.413139 1179266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f5cp6" in "kube-system" namespace to be "Ready" ...
	I1212 00:51:01.580519 1179266 request.go:629] Waited for 167.311064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f5cp6
	I1212 00:51:01.580584 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f5cp6
	I1212 00:51:01.580593 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:01.580602 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:01.580610 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:01.583161 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:01.583185 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:01.583195 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:01 GMT
	I1212 00:51:01.583202 1179266 round_trippers.go:580]     Audit-Id: a9ab5843-a404-4ea7-a756-88c995c11e29
	I1212 00:51:01.583209 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:01.583215 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:01.583227 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:01.583236 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:01.583353 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-f5cp6","generateName":"kube-proxy-","namespace":"kube-system","uid":"76981ae8-cfac-4127-9e85-c999ae8117b5","resourceVersion":"485","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9309196b-e55a-4237-bd1a-ef2d9d1fc1f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9309196b-e55a-4237-bd1a-ef2d9d1fc1f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
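The "Waited for ... due to client-side throttling, not priority and fairness" messages in this stretch of the log come from client-go's client-side rate limiter, not from server-side API Priority and Fairness. A hedged sketch of how a caller can raise those limits on a rest.Config follows; the kubeconfig path and the QPS/Burst values are illustrative assumptions, not minikube's settings.

// clientutil_sketch.go: illustrative only.
package clientutil

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with higher client-side rate limits so
// long polling loops like the one logged here are throttled less often.
func newFastClient() (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // client-go's default is 5 requests per second
	cfg.Burst = 100 // client-go's default burst is 10
	return kubernetes.NewForConfig(cfg)
}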
	I1212 00:51:01.780077 1179266 request.go:629] Waited for 196.243248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:51:01.780163 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339-m02
	I1212 00:51:01.780169 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:01.780178 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:01.780190 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:01.782800 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:01.782826 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:01.782835 1179266 round_trippers.go:580]     Audit-Id: 7e176780-9b2e-4d90-bcb7-c3f21dad2f1a
	I1212 00:51:01.782844 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:01.782850 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:01.782864 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:01.782871 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:01.782884 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:01 GMT
	I1212 00:51:01.783173 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339-m02","uid":"8b4de589-7a89-452b-accb-60e7c84f2e47","resourceVersion":"491","creationTimestamp":"2023-12-12T00:50:46Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_50_46_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5930 chars]
	I1212 00:51:01.783577 1179266 pod_ready.go:92] pod "kube-proxy-f5cp6" in "kube-system" namespace has status "Ready":"True"
	I1212 00:51:01.783596 1179266 pod_ready.go:81] duration metric: took 370.443292ms waiting for pod "kube-proxy-f5cp6" in "kube-system" namespace to be "Ready" ...
	I1212 00:51:01.783608 1179266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ff2v2" in "kube-system" namespace to be "Ready" ...
	I1212 00:51:01.980008 1179266 request.go:629] Waited for 196.31216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ff2v2
	I1212 00:51:01.980071 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ff2v2
	I1212 00:51:01.980078 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:01.980087 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:01.980098 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:01.982726 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:01.982784 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:01.982805 1179266 round_trippers.go:580]     Audit-Id: a45c480e-d886-4e06-b6ae-d26571692cfc
	I1212 00:51:01.982825 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:01.982843 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:01.982879 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:01.982892 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:01.982899 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:01 GMT
	I1212 00:51:01.983054 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ff2v2","generateName":"kube-proxy-","namespace":"kube-system","uid":"e0f8e0fe-73dc-4aed-b0ce-cfbbe59813cf","resourceVersion":"384","creationTimestamp":"2023-12-12T00:50:25Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9309196b-e55a-4237-bd1a-ef2d9d1fc1f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9309196b-e55a-4237-bd1a-ef2d9d1fc1f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1212 00:51:02.179769 1179266 request.go:629] Waited for 196.220742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:51:02.179849 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:51:02.179856 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:02.179865 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:02.179877 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:02.182480 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:02.182520 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:02.182528 1179266 round_trippers.go:580]     Audit-Id: 2634a108-13f1-4f8d-af85-37bd15608a4d
	I1212 00:51:02.182535 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:02.182541 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:02.182547 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:02.182554 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:02.182561 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:02 GMT
	I1212 00:51:02.182679 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:51:02.183093 1179266 pod_ready.go:92] pod "kube-proxy-ff2v2" in "kube-system" namespace has status "Ready":"True"
	I1212 00:51:02.183110 1179266 pod_ready.go:81] duration metric: took 399.491338ms waiting for pod "kube-proxy-ff2v2" in "kube-system" namespace to be "Ready" ...
	I1212 00:51:02.183121 1179266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-270339" in "kube-system" namespace to be "Ready" ...
	I1212 00:51:02.380068 1179266 request.go:629] Waited for 196.859574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-270339
	I1212 00:51:02.380146 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-270339
	I1212 00:51:02.380159 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:02.380169 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:02.380176 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:02.382739 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:02.382763 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:02.382772 1179266 round_trippers.go:580]     Audit-Id: 4babb98f-5af6-44d2-898d-2df1e5547fc5
	I1212 00:51:02.382778 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:02.382802 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:02.382817 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:02.382823 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:02.382829 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:02 GMT
	I1212 00:51:02.382962 1179266 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-270339","namespace":"kube-system","uid":"2d786047-a353-4b4c-94da-4b02d6191903","resourceVersion":"415","creationTimestamp":"2023-12-12T00:50:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ba500f988f5556d7475d9e018b4ec918","kubernetes.io/config.mirror":"ba500f988f5556d7475d9e018b4ec918","kubernetes.io/config.seen":"2023-12-12T00:50:11.291439472Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:50:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1212 00:51:02.579626 1179266 request.go:629] Waited for 196.243239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:51:02.579737 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-270339
	I1212 00:51:02.579756 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:02.579767 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:02.579774 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:02.582157 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:02.582178 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:02.582187 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:02.582211 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:02 GMT
	I1212 00:51:02.582228 1179266 round_trippers.go:580]     Audit-Id: a0e303b8-c4c5-4d92-b21e-270aba47f156
	I1212 00:51:02.582236 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:02.582245 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:02.582252 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:02.582802 1179266 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T00:50:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1212 00:51:02.583197 1179266 pod_ready.go:92] pod "kube-scheduler-multinode-270339" in "kube-system" namespace has status "Ready":"True"
	I1212 00:51:02.583216 1179266 pod_ready.go:81] duration metric: took 400.083557ms waiting for pod "kube-scheduler-multinode-270339" in "kube-system" namespace to be "Ready" ...
	I1212 00:51:02.583228 1179266 pod_ready.go:38] duration metric: took 1.200368297s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:51:02.583246 1179266 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:51:02.583299 1179266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:51:02.597047 1179266 system_svc.go:56] duration metric: took 13.793247ms WaitForService to wait for kubelet.
	I1212 00:51:02.597075 1179266 kubeadm.go:581] duration metric: took 15.245044777s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 00:51:02.597153 1179266 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:51:02.780411 1179266 request.go:629] Waited for 183.181122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1212 00:51:02.780464 1179266 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1212 00:51:02.780477 1179266 round_trippers.go:469] Request Headers:
	I1212 00:51:02.780488 1179266 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:51:02.780504 1179266 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1212 00:51:02.783028 1179266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:51:02.783053 1179266 round_trippers.go:577] Response Headers:
	I1212 00:51:02.783062 1179266 round_trippers.go:580]     Content-Type: application/json
	I1212 00:51:02.783068 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5f9cefe-032e-4f25-a07a-526561e3de8c
	I1212 00:51:02.783074 1179266 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26de7b73-767f-4411-8d2c-59ab499f2cf9
	I1212 00:51:02.783082 1179266 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:51:02 GMT
	I1212 00:51:02.783088 1179266 round_trippers.go:580]     Audit-Id: f519765c-6180-4da1-a4ce-19e999b8f6cf
	I1212 00:51:02.783094 1179266 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:51:02.783267 1179266 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"495"},"items":[{"metadata":{"name":"multinode-270339","uid":"395137a9-e389-480f-aed1-016c814e9fd7","resourceVersion":"389","creationTimestamp":"2023-12-12T00:50:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-270339","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-270339","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_50_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 13004 chars]
	I1212 00:51:02.783923 1179266 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 00:51:02.783943 1179266 node_conditions.go:123] node cpu capacity is 2
	I1212 00:51:02.783953 1179266 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 00:51:02.783963 1179266 node_conditions.go:123] node cpu capacity is 2
	I1212 00:51:02.783969 1179266 node_conditions.go:105] duration metric: took 186.809483ms to run NodePressure ...
	I1212 00:51:02.783984 1179266 start.go:228] waiting for startup goroutines ...
	I1212 00:51:02.784012 1179266 start.go:242] writing updated cluster config ...
	I1212 00:51:02.784325 1179266 ssh_runner.go:195] Run: rm -f paused
	I1212 00:51:02.845774 1179266 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 00:51:02.849756 1179266 out.go:177] * Done! kubectl is now configured to use "multinode-270339" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Dec 12 00:50:29 multinode-270339 crio[896]: time="2023-12-12 00:50:29.117860461Z" level=info msg="Starting container: d38bea35db4eea0f6d8b82ea7ab2519ed578d51cda9e4b65a62313cc32cb95e6" id=f3f35740-1cef-4620-a1f7-143b16fb2594 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:50:29 multinode-270339 crio[896]: time="2023-12-12 00:50:29.127065556Z" level=info msg="Started container" PID=1937 containerID=d38bea35db4eea0f6d8b82ea7ab2519ed578d51cda9e4b65a62313cc32cb95e6 description=kube-system/storage-provisioner/storage-provisioner id=f3f35740-1cef-4620-a1f7-143b16fb2594 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f5b151e23b7bba417f6830ede41e22a74d4b7c9f504485831f986f39a76919fc
	Dec 12 00:50:29 multinode-270339 crio[896]: time="2023-12-12 00:50:29.154269119Z" level=info msg="Created container cf26fc01cec9931782e26a7db50ab318e84127da65ce481cc50d9455ac137634: kube-system/coredns-5dd5756b68-7n4rj/coredns" id=00a7d052-0ae4-467f-be55-da2db9425949 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:50:29 multinode-270339 crio[896]: time="2023-12-12 00:50:29.154768180Z" level=info msg="Starting container: cf26fc01cec9931782e26a7db50ab318e84127da65ce481cc50d9455ac137634" id=5cef2dd8-081f-4003-8617-0a17ce0605d9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:50:29 multinode-270339 crio[896]: time="2023-12-12 00:50:29.176503290Z" level=info msg="Started container" PID=1960 containerID=cf26fc01cec9931782e26a7db50ab318e84127da65ce481cc50d9455ac137634 description=kube-system/coredns-5dd5756b68-7n4rj/coredns id=5cef2dd8-081f-4003-8617-0a17ce0605d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90f200cd75218df8e91ccda886f9e9530aa82bfe868f8faf357549dd3d694884
	Dec 12 00:51:04 multinode-270339 crio[896]: time="2023-12-12 00:51:04.038448268Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-tqh9c/POD" id=89e563cc-85e7-49bc-9449-62e4d2728a7c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:51:04 multinode-270339 crio[896]: time="2023-12-12 00:51:04.038508968Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 12 00:51:04 multinode-270339 crio[896]: time="2023-12-12 00:51:04.075188872Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-tqh9c Namespace:default ID:4e89894ff8294e540a97b0a016a2d820426677e7cc9b22046eb8c92172217068 UID:1a8e36a0-7fdb-4664-be87-79fdcb2ae14b NetNS:/var/run/netns/27013c8f-5cff-4d9a-85c8-6c25dd7dcc56 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 12 00:51:04 multinode-270339 crio[896]: time="2023-12-12 00:51:04.075221536Z" level=info msg="Adding pod default_busybox-5bc68d56bd-tqh9c to CNI network \"kindnet\" (type=ptp)"
	Dec 12 00:51:04 multinode-270339 crio[896]: time="2023-12-12 00:51:04.087088769Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-tqh9c Namespace:default ID:4e89894ff8294e540a97b0a016a2d820426677e7cc9b22046eb8c92172217068 UID:1a8e36a0-7fdb-4664-be87-79fdcb2ae14b NetNS:/var/run/netns/27013c8f-5cff-4d9a-85c8-6c25dd7dcc56 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 12 00:51:04 multinode-270339 crio[896]: time="2023-12-12 00:51:04.087239657Z" level=info msg="Checking pod default_busybox-5bc68d56bd-tqh9c for CNI network kindnet (type=ptp)"
	Dec 12 00:51:04 multinode-270339 crio[896]: time="2023-12-12 00:51:04.091094317Z" level=info msg="Ran pod sandbox 4e89894ff8294e540a97b0a016a2d820426677e7cc9b22046eb8c92172217068 with infra container: default/busybox-5bc68d56bd-tqh9c/POD" id=89e563cc-85e7-49bc-9449-62e4d2728a7c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:51:04 multinode-270339 crio[896]: time="2023-12-12 00:51:04.092183827Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=84df1298-d645-4ae1-8c06-edc88766e6bf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:51:04 multinode-270339 crio[896]: time="2023-12-12 00:51:04.092396538Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=84df1298-d645-4ae1-8c06-edc88766e6bf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:51:04 multinode-270339 crio[896]: time="2023-12-12 00:51:04.093102133Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=202ae9b0-c731-4cde-b789-dd4bcefc8225 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:51:04 multinode-270339 crio[896]: time="2023-12-12 00:51:04.094106238Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 12 00:51:04 multinode-270339 crio[896]: time="2023-12-12 00:51:04.754742354Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 12 00:51:05 multinode-270339 crio[896]: time="2023-12-12 00:51:05.857155213Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=202ae9b0-c731-4cde-b789-dd4bcefc8225 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:51:05 multinode-270339 crio[896]: time="2023-12-12 00:51:05.858126753Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=ca7bc07f-e6ad-465a-a72c-282a7ad42ae7 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:51:05 multinode-270339 crio[896]: time="2023-12-12 00:51:05.858801021Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ca7bc07f-e6ad-465a-a72c-282a7ad42ae7 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:51:05 multinode-270339 crio[896]: time="2023-12-12 00:51:05.860285744Z" level=info msg="Creating container: default/busybox-5bc68d56bd-tqh9c/busybox" id=a8144beb-96a7-467c-a94e-59dccf0546fd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:51:05 multinode-270339 crio[896]: time="2023-12-12 00:51:05.860380625Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 12 00:51:05 multinode-270339 crio[896]: time="2023-12-12 00:51:05.918106290Z" level=info msg="Created container d685a5231e98903cca9368f2c9aa236662430adee3ce480224cde5bcfa5548b5: default/busybox-5bc68d56bd-tqh9c/busybox" id=a8144beb-96a7-467c-a94e-59dccf0546fd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:51:05 multinode-270339 crio[896]: time="2023-12-12 00:51:05.918864339Z" level=info msg="Starting container: d685a5231e98903cca9368f2c9aa236662430adee3ce480224cde5bcfa5548b5" id=a56df8fe-1e67-4d10-b804-94d7bd04f144 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:51:05 multinode-270339 crio[896]: time="2023-12-12 00:51:05.927881255Z" level=info msg="Started container" PID=2095 containerID=d685a5231e98903cca9368f2c9aa236662430adee3ce480224cde5bcfa5548b5 description=default/busybox-5bc68d56bd-tqh9c/busybox id=a56df8fe-1e67-4d10-b804-94d7bd04f144 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e89894ff8294e540a97b0a016a2d820426677e7cc9b22046eb8c92172217068
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d685a5231e989       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   4 seconds ago        Running             busybox                   0                   4e89894ff8294       busybox-5bc68d56bd-tqh9c
	cf26fc01cec99       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      41 seconds ago       Running             coredns                   0                   90f200cd75218       coredns-5dd5756b68-7n4rj
	d38bea35db4ee       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      41 seconds ago       Running             storage-provisioner       0                   f5b151e23b7bb       storage-provisioner
	185bc8b1d199a       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      43 seconds ago       Running             kindnet-cni               0                   9b00f2e3baedf       kindnet-529wf
	75203dcdcf9cd       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                      43 seconds ago       Running             kube-proxy                0                   a1a72b6e0ae62       kube-proxy-ff2v2
	59d7e7798d350       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   6155d19b32a98       etcd-multinode-270339
	a2b27ea3e250d       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                      About a minute ago   Running             kube-scheduler            0                   b4dd1dbe82eda       kube-scheduler-multinode-270339
	de16e9a7fe41e       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                      About a minute ago   Running             kube-apiserver            0                   9651e1e08ad31       kube-apiserver-multinode-270339
	2a8457eecb179       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                      About a minute ago   Running             kube-controller-manager   0                   4588531e684f8       kube-controller-manager-multinode-270339
	
	* 
	* ==> coredns [cf26fc01cec9931782e26a7db50ab318e84127da65ce481cc50d9455ac137634] <==
	* [INFO] 10.244.1.2:42428 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000777052s
	[INFO] 10.244.0.3:41499 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097803s
	[INFO] 10.244.0.3:54936 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00108443s
	[INFO] 10.244.0.3:60807 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088071s
	[INFO] 10.244.0.3:50652 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000665s
	[INFO] 10.244.0.3:53146 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000872901s
	[INFO] 10.244.0.3:56898 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064236s
	[INFO] 10.244.0.3:57085 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053693s
	[INFO] 10.244.0.3:42117 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062152s
	[INFO] 10.244.1.2:40954 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160553s
	[INFO] 10.244.1.2:33861 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068371s
	[INFO] 10.244.1.2:54101 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062652s
	[INFO] 10.244.1.2:58639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066731s
	[INFO] 10.244.0.3:54045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186735s
	[INFO] 10.244.0.3:57819 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148656s
	[INFO] 10.244.0.3:59197 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081655s
	[INFO] 10.244.0.3:48788 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009s
	[INFO] 10.244.1.2:33127 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102848s
	[INFO] 10.244.1.2:58911 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000126527s
	[INFO] 10.244.1.2:46614 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103907s
	[INFO] 10.244.1.2:48533 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106828s
	[INFO] 10.244.0.3:49976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116066s
	[INFO] 10.244.0.3:33515 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000071136s
	[INFO] 10.244.0.3:50502 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000069372s
	[INFO] 10.244.0.3:60446 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000062201s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-270339
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-270339
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4
	                    minikube.k8s.io/name=multinode-270339
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T00_50_12_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 00:50:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-270339
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 00:51:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 00:50:28 +0000   Tue, 12 Dec 2023 00:50:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 00:50:28 +0000   Tue, 12 Dec 2023 00:50:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 00:50:28 +0000   Tue, 12 Dec 2023 00:50:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 00:50:28 +0000   Tue, 12 Dec 2023 00:50:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-270339
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd8e7245d1cc459fa2ee380629572c9f
	  System UUID:                62e83803-ea5e-4019-93df-09289dde205d
	  Boot ID:                    1e71add7-2409-4eb4-97fc-c7110220f3c5
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-tqh9c                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-7n4rj                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     45s
	  kube-system                 etcd-multinode-270339                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-529wf                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      45s
	  kube-system                 kube-apiserver-multinode-270339             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-multinode-270339    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-ff2v2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-scheduler-multinode-270339             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 43s   kube-proxy       
	  Normal  Starting                 59s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s   kubelet          Node multinode-270339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s   kubelet          Node multinode-270339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s   kubelet          Node multinode-270339 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s   node-controller  Node multinode-270339 event: Registered Node multinode-270339 in Controller
	  Normal  NodeReady                42s   kubelet          Node multinode-270339 status is now: NodeReady
	
	
	Name:               multinode-270339-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-270339-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4
	                    minikube.k8s.io/name=multinode-270339
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_12T00_50_46_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 00:50:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-270339-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 00:51:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 00:51:00 +0000   Tue, 12 Dec 2023 00:50:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 00:51:00 +0000   Tue, 12 Dec 2023 00:50:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 00:51:00 +0000   Tue, 12 Dec 2023 00:50:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 00:51:00 +0000   Tue, 12 Dec 2023 00:51:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-270339-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 597ba1cce03642dfb9a997b360d466ed
	  System UUID:                ea5f5ec5-11e8-48bd-95dd-c61c08e6953c
	  Boot ID:                    1e71add7-2409-4eb4-97fc-c7110220f3c5
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-f7wq7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-khbts               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-proxy-f5cp6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11s                kube-proxy       
	  Normal  NodeHasSufficientMemory  24s (x5 over 25s)  kubelet          Node multinode-270339-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x5 over 25s)  kubelet          Node multinode-270339-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x5 over 25s)  kubelet          Node multinode-270339-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21s                node-controller  Node multinode-270339-m02 event: Registered Node multinode-270339-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-270339-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001117] FS-Cache: O-key=[8] '12633b0000000000'
	[  +0.000754] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000973] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=0000000059a16183
	[  +0.001084] FS-Cache: N-key=[8] '12633b0000000000'
	[  +0.003102] FS-Cache: Duplicate cookie detected
	[  +0.000725] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001029] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=000000006a4eadc9
	[  +0.001098] FS-Cache: O-key=[8] '12633b0000000000'
	[  +0.000729] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000971] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=00000000ef12e937
	[  +0.001096] FS-Cache: N-key=[8] '12633b0000000000'
	[  +1.721638] FS-Cache: Duplicate cookie detected
	[  +0.000740] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001038] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=000000009ed47378
	[  +0.001181] FS-Cache: O-key=[8] '11633b0000000000'
	[  +0.000791] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000997] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=0000000059a16183
	[  +0.001129] FS-Cache: N-key=[8] '11633b0000000000'
	[  +0.334169] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001009] FS-Cache: O-cookie d=0000000058fb07ab{9p.inode} n=000000009942789b
	[  +0.001136] FS-Cache: O-key=[8] '17633b0000000000'
	[  +0.000746] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=0000000058fb07ab{9p.inode} n=000000006ac44817
	[  +0.001100] FS-Cache: N-key=[8] '17633b0000000000'
	
	* 
	* ==> etcd [59d7e7798d35004408850f78339788ffbfded31458cf1adb8d7081dc63280c41] <==
	* {"level":"info","ts":"2023-12-12T00:50:03.76143Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-12T00:50:03.761473Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-12T00:50:03.7615Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T00:50:03.761533Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T00:50:03.761542Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T00:50:03.76197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-12-12T00:50:03.762127Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-12-12T00:50:04.733299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T00:50:04.733428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T00:50:04.733471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-12-12T00:50:04.733525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T00:50:04.733557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-12T00:50:04.7336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T00:50:04.733637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-12T00:50:04.737417Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-270339 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T00:50:04.737643Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:50:04.737769Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:50:04.739021Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-12-12T00:50:04.739152Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:50:04.754909Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T00:50:04.755061Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T00:50:04.755106Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T00:50:04.755065Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:50:04.755221Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:50:04.75528Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  00:51:11 up  7:33,  0 users,  load average: 1.70, 1.91, 1.23
	Linux multinode-270339 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [185bc8b1d199adae21d33d114048b84465d56212860d966b469dd7215df258fd] <==
	* I1212 00:50:27.825972       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 00:50:27.826046       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I1212 00:50:27.826173       1 main.go:116] setting mtu 1500 for CNI 
	I1212 00:50:27.826184       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 00:50:27.826198       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 00:50:28.219272       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1212 00:50:28.219307       1 main.go:227] handling current node
	I1212 00:50:38.330416       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1212 00:50:38.330444       1 main.go:227] handling current node
	I1212 00:50:48.342680       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1212 00:50:48.342706       1 main.go:227] handling current node
	I1212 00:50:48.342717       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1212 00:50:48.342722       1 main.go:250] Node multinode-270339-m02 has CIDR [10.244.1.0/24] 
	I1212 00:50:48.342866       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1212 00:50:58.350490       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1212 00:50:58.350519       1 main.go:227] handling current node
	I1212 00:50:58.350532       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1212 00:50:58.350539       1 main.go:250] Node multinode-270339-m02 has CIDR [10.244.1.0/24] 
	I1212 00:51:08.354892       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1212 00:51:08.354924       1 main.go:227] handling current node
	I1212 00:51:08.354935       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1212 00:51:08.354941       1 main.go:250] Node multinode-270339-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [de16e9a7fe41ea7a2109dd69ccb1dc5027fd22075fb96d6cf98c508f95a14a7e] <==
	* I1212 00:50:08.531442       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 00:50:08.532885       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 00:50:08.533454       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 00:50:08.533585       1 aggregator.go:166] initial CRD sync complete...
	I1212 00:50:08.533621       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 00:50:08.533650       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 00:50:08.533678       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:50:08.536712       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 00:50:08.537570       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 00:50:08.724627       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:50:09.237076       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 00:50:09.241813       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 00:50:09.241929       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 00:50:09.779976       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:50:09.820089       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:50:09.950538       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 00:50:09.958840       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1212 00:50:09.959926       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 00:50:09.963909       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 00:50:10.458302       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 00:50:11.218634       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 00:50:11.231573       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 00:50:11.244358       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 00:50:24.788812       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1212 00:50:25.393951       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [2a8457eecb179f22f689cf5790918bccc2d0a36c2b04da1237f4f7bf7bf12ec9] <==
	* I1212 00:50:26.128287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.130034ms"
	I1212 00:50:26.128393       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.561µs"
	I1212 00:50:28.693116       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.925µs"
	I1212 00:50:28.711307       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.547µs"
	I1212 00:50:29.478669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.310981ms"
	I1212 00:50:29.478839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.077µs"
	I1212 00:50:29.524610       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1212 00:50:46.552429       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-270339-m02\" does not exist"
	I1212 00:50:46.575271       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-khbts"
	I1212 00:50:46.584339       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-f5cp6"
	I1212 00:50:46.584539       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-270339-m02" podCIDRs=["10.244.1.0/24"]
	I1212 00:50:49.527930       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-270339-m02"
	I1212 00:50:49.528274       1 event.go:307] "Event occurred" object="multinode-270339-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-270339-m02 event: Registered Node multinode-270339-m02 in Controller"
	I1212 00:51:00.951451       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-270339-m02"
	I1212 00:51:03.693167       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1212 00:51:03.709676       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-f7wq7"
	I1212 00:51:03.719125       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-tqh9c"
	I1212 00:51:03.750494       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="57.144448ms"
	I1212 00:51:03.773291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.746034ms"
	I1212 00:51:03.773355       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.067µs"
	I1212 00:51:04.546132       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-f7wq7" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-f7wq7"
	I1212 00:51:06.197293       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.722835ms"
	I1212 00:51:06.197371       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="41.017µs"
	I1212 00:51:06.523744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.094439ms"
	I1212 00:51:06.524552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.964µs"
	
	* 
	* ==> kube-proxy [75203dcdcf9cd5b8b139ef30bf82905175776e2bf0a65a20e03a482750c7a035] <==
	* I1212 00:50:27.870232       1 server_others.go:69] "Using iptables proxy"
	I1212 00:50:27.884651       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1212 00:50:27.910524       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:50:27.912659       1 server_others.go:152] "Using iptables Proxier"
	I1212 00:50:27.912693       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 00:50:27.912701       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 00:50:27.912774       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 00:50:27.913005       1 server.go:846] "Version info" version="v1.28.4"
	I1212 00:50:27.913020       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:50:27.914360       1 config.go:188] "Starting service config controller"
	I1212 00:50:27.914380       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 00:50:27.914399       1 config.go:97] "Starting endpoint slice config controller"
	I1212 00:50:27.914403       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 00:50:27.917080       1 config.go:315] "Starting node config controller"
	I1212 00:50:27.917098       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:50:28.015194       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 00:50:28.015206       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:50:28.017231       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [a2b27ea3e250df6e4daba6c8f78a40c0aa437f0958df53f8bbe1cc762764e708] <==
	* W1212 00:50:08.487834       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 00:50:08.487888       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 00:50:08.487981       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 00:50:08.488029       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 00:50:08.488104       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 00:50:08.488142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 00:50:08.488214       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 00:50:08.488250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 00:50:08.488383       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 00:50:08.488423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 00:50:08.488505       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 00:50:08.488567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 00:50:09.325833       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 00:50:09.325954       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 00:50:09.359597       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 00:50:09.359635       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 00:50:09.414280       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 00:50:09.414389       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 00:50:09.471080       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 00:50:09.471181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 00:50:09.527004       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 00:50:09.527038       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 00:50:09.551433       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 00:50:09.551471       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1212 00:50:11.776645       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 12 00:50:25 multinode-270339 kubelet[1384]: I1212 00:50:25.616958    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxqqt\" (UniqueName: \"kubernetes.io/projected/e0f8e0fe-73dc-4aed-b0ce-cfbbe59813cf-kube-api-access-dxqqt\") pod \"kube-proxy-ff2v2\" (UID: \"e0f8e0fe-73dc-4aed-b0ce-cfbbe59813cf\") " pod="kube-system/kube-proxy-ff2v2"
	Dec 12 00:50:25 multinode-270339 kubelet[1384]: I1212 00:50:25.616981    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c92bbff9-fd78-417d-844a-71166788153a-cni-cfg\") pod \"kindnet-529wf\" (UID: \"c92bbff9-fd78-417d-844a-71166788153a\") " pod="kube-system/kindnet-529wf"
	Dec 12 00:50:26 multinode-270339 kubelet[1384]: E1212 00:50:26.719056    1384 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 00:50:26 multinode-270339 kubelet[1384]: E1212 00:50:26.719191    1384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e0f8e0fe-73dc-4aed-b0ce-cfbbe59813cf-kube-proxy podName:e0f8e0fe-73dc-4aed-b0ce-cfbbe59813cf nodeName:}" failed. No retries permitted until 2023-12-12 00:50:27.219157963 +0000 UTC m=+16.034172361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/e0f8e0fe-73dc-4aed-b0ce-cfbbe59813cf-kube-proxy") pod "kube-proxy-ff2v2" (UID: "e0f8e0fe-73dc-4aed-b0ce-cfbbe59813cf") : failed to sync configmap cache: timed out waiting for the condition
	Dec 12 00:50:26 multinode-270339 kubelet[1384]: E1212 00:50:26.903393    1384 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 00:50:26 multinode-270339 kubelet[1384]: E1212 00:50:26.903456    1384 projected.go:198] Error preparing data for projected volume kube-api-access-m58nl for pod kube-system/kindnet-529wf: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 00:50:26 multinode-270339 kubelet[1384]: E1212 00:50:26.903543    1384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c92bbff9-fd78-417d-844a-71166788153a-kube-api-access-m58nl podName:c92bbff9-fd78-417d-844a-71166788153a nodeName:}" failed. No retries permitted until 2023-12-12 00:50:27.403521186 +0000 UTC m=+16.218535584 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m58nl" (UniqueName: "kubernetes.io/projected/c92bbff9-fd78-417d-844a-71166788153a-kube-api-access-m58nl") pod "kindnet-529wf" (UID: "c92bbff9-fd78-417d-844a-71166788153a") : failed to sync configmap cache: timed out waiting for the condition
	Dec 12 00:50:26 multinode-270339 kubelet[1384]: E1212 00:50:26.916800    1384 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 00:50:26 multinode-270339 kubelet[1384]: E1212 00:50:26.916843    1384 projected.go:198] Error preparing data for projected volume kube-api-access-dxqqt for pod kube-system/kube-proxy-ff2v2: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 00:50:26 multinode-270339 kubelet[1384]: E1212 00:50:26.916919    1384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0f8e0fe-73dc-4aed-b0ce-cfbbe59813cf-kube-api-access-dxqqt podName:e0f8e0fe-73dc-4aed-b0ce-cfbbe59813cf nodeName:}" failed. No retries permitted until 2023-12-12 00:50:27.416895615 +0000 UTC m=+16.231910012 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dxqqt" (UniqueName: "kubernetes.io/projected/e0f8e0fe-73dc-4aed-b0ce-cfbbe59813cf-kube-api-access-dxqqt") pod "kube-proxy-ff2v2" (UID: "e0f8e0fe-73dc-4aed-b0ce-cfbbe59813cf") : failed to sync configmap cache: timed out waiting for the condition
	Dec 12 00:50:28 multinode-270339 kubelet[1384]: I1212 00:50:28.459595    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-529wf" podStartSLOduration=3.459541456 podCreationTimestamp="2023-12-12 00:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 00:50:28.448430147 +0000 UTC m=+17.263444553" watchObservedRunningTime="2023-12-12 00:50:28.459541456 +0000 UTC m=+17.274555854"
	Dec 12 00:50:28 multinode-270339 kubelet[1384]: I1212 00:50:28.665263    1384 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 12 00:50:28 multinode-270339 kubelet[1384]: I1212 00:50:28.691550    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ff2v2" podStartSLOduration=3.691508098 podCreationTimestamp="2023-12-12 00:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 00:50:28.464892235 +0000 UTC m=+17.279906633" watchObservedRunningTime="2023-12-12 00:50:28.691508098 +0000 UTC m=+17.506522496"
	Dec 12 00:50:28 multinode-270339 kubelet[1384]: I1212 00:50:28.691853    1384 topology_manager.go:215] "Topology Admit Handler" podUID="16efc97c-281e-4ae4-89a2-7c7507db2e8f" podNamespace="kube-system" podName="coredns-5dd5756b68-7n4rj"
	Dec 12 00:50:28 multinode-270339 kubelet[1384]: I1212 00:50:28.695113    1384 topology_manager.go:215] "Topology Admit Handler" podUID="667961d7-7931-4d8a-8b56-5d72e1687ab3" podNamespace="kube-system" podName="storage-provisioner"
	Dec 12 00:50:28 multinode-270339 kubelet[1384]: I1212 00:50:28.739939    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rklmv\" (UniqueName: \"kubernetes.io/projected/16efc97c-281e-4ae4-89a2-7c7507db2e8f-kube-api-access-rklmv\") pod \"coredns-5dd5756b68-7n4rj\" (UID: \"16efc97c-281e-4ae4-89a2-7c7507db2e8f\") " pod="kube-system/coredns-5dd5756b68-7n4rj"
	Dec 12 00:50:28 multinode-270339 kubelet[1384]: I1212 00:50:28.739994    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c549j\" (UniqueName: \"kubernetes.io/projected/667961d7-7931-4d8a-8b56-5d72e1687ab3-kube-api-access-c549j\") pod \"storage-provisioner\" (UID: \"667961d7-7931-4d8a-8b56-5d72e1687ab3\") " pod="kube-system/storage-provisioner"
	Dec 12 00:50:28 multinode-270339 kubelet[1384]: I1212 00:50:28.740022    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16efc97c-281e-4ae4-89a2-7c7507db2e8f-config-volume\") pod \"coredns-5dd5756b68-7n4rj\" (UID: \"16efc97c-281e-4ae4-89a2-7c7507db2e8f\") " pod="kube-system/coredns-5dd5756b68-7n4rj"
	Dec 12 00:50:28 multinode-270339 kubelet[1384]: I1212 00:50:28.740057    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/667961d7-7931-4d8a-8b56-5d72e1687ab3-tmp\") pod \"storage-provisioner\" (UID: \"667961d7-7931-4d8a-8b56-5d72e1687ab3\") " pod="kube-system/storage-provisioner"
	Dec 12 00:50:29 multinode-270339 kubelet[1384]: W1212 00:50:29.029184    1384 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8cbfcb2f926f2933e9f6ac3a1ae628335b89b5892c0a645f94e42abd1790dda6/crio-f5b151e23b7bba417f6830ede41e22a74d4b7c9f504485831f986f39a76919fc WatchSource:0}: Error finding container f5b151e23b7bba417f6830ede41e22a74d4b7c9f504485831f986f39a76919fc: Status 404 returned error can't find the container with id f5b151e23b7bba417f6830ede41e22a74d4b7c9f504485831f986f39a76919fc
	Dec 12 00:50:29 multinode-270339 kubelet[1384]: I1212 00:50:29.466864    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.46682161 podCreationTimestamp="2023-12-12 00:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 00:50:29.451819523 +0000 UTC m=+18.266833929" watchObservedRunningTime="2023-12-12 00:50:29.46682161 +0000 UTC m=+18.281836008"
	Dec 12 00:50:31 multinode-270339 kubelet[1384]: I1212 00:50:31.350418    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-7n4rj" podStartSLOduration=6.350374025 podCreationTimestamp="2023-12-12 00:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 00:50:29.467064319 +0000 UTC m=+18.282078717" watchObservedRunningTime="2023-12-12 00:50:31.350374025 +0000 UTC m=+20.165388423"
	Dec 12 00:51:03 multinode-270339 kubelet[1384]: I1212 00:51:03.736395    1384 topology_manager.go:215] "Topology Admit Handler" podUID="1a8e36a0-7fdb-4664-be87-79fdcb2ae14b" podNamespace="default" podName="busybox-5bc68d56bd-tqh9c"
	Dec 12 00:51:03 multinode-270339 kubelet[1384]: I1212 00:51:03.856939    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfwfj\" (UniqueName: \"kubernetes.io/projected/1a8e36a0-7fdb-4664-be87-79fdcb2ae14b-kube-api-access-dfwfj\") pod \"busybox-5bc68d56bd-tqh9c\" (UID: \"1a8e36a0-7fdb-4664-be87-79fdcb2ae14b\") " pod="default/busybox-5bc68d56bd-tqh9c"
	Dec 12 00:51:04 multinode-270339 kubelet[1384]: W1212 00:51:04.089063    1384 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8cbfcb2f926f2933e9f6ac3a1ae628335b89b5892c0a645f94e42abd1790dda6/crio-4e89894ff8294e540a97b0a016a2d820426677e7cc9b22046eb8c92172217068 WatchSource:0}: Error finding container 4e89894ff8294e540a97b0a016a2d820426677e7cc9b22046eb8c92172217068: Status 404 returned error can't find the container with id 4e89894ff8294e540a97b0a016a2d820426677e7cc9b22046eb8c92172217068
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-270339 -n multinode-270339
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-270339 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.02s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (85.52s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.903481874.exe start -p running-upgrade-706449 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.903481874.exe start -p running-upgrade-706449 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m17.486424071s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-706449 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-706449 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.173057182s)

                                                
                                                
-- stdout --
	* [running-upgrade-706449] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-706449 in cluster running-upgrade-706449
	* Pulling base image ...
	* Updating the running docker "running-upgrade-706449" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 01:08:04.511030 1239921 out.go:296] Setting OutFile to fd 1 ...
	I1212 01:08:04.511300 1239921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 01:08:04.511322 1239921 out.go:309] Setting ErrFile to fd 2...
	I1212 01:08:04.511341 1239921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 01:08:04.511601 1239921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	I1212 01:08:04.512005 1239921 out.go:303] Setting JSON to false
	I1212 01:08:04.513894 1239921 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":28231,"bootTime":1702315054,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 01:08:04.514034 1239921 start.go:138] virtualization:  
	I1212 01:08:04.518153 1239921 out.go:177] * [running-upgrade-706449] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 01:08:04.520121 1239921 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 01:08:04.520198 1239921 notify.go:220] Checking for updates...
	I1212 01:08:04.520100 1239921 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1212 01:08:04.530048 1239921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:08:04.532350 1239921 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 01:08:04.534391 1239921 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	I1212 01:08:04.536302 1239921 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:08:04.538165 1239921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:08:04.540193 1239921 config.go:182] Loaded profile config "running-upgrade-706449": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1212 01:08:04.542474 1239921 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1212 01:08:04.544514 1239921 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 01:08:04.596179 1239921 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 01:08:04.596285 1239921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:08:04.793291 1239921 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-12 01:08:04.778098313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 01:08:04.793394 1239921 docker.go:295] overlay module found
	I1212 01:08:04.795585 1239921 out.go:177] * Using the docker driver based on existing profile
	I1212 01:08:04.797611 1239921 start.go:298] selected driver: docker
	I1212 01:08:04.797632 1239921 start.go:902] validating driver "docker" against &{Name:running-upgrade-706449 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-706449 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.146 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 01:08:04.797720 1239921 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:08:04.798354 1239921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:08:04.894243 1239921 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1212 01:08:04.985439 1239921 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-12 01:08:04.965783327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 01:08:04.985819 1239921 cni.go:84] Creating CNI manager for ""
	I1212 01:08:04.985842 1239921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 01:08:04.985856 1239921 start_flags.go:323] config:
	{Name:running-upgrade-706449 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-706449 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.146 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 01:08:04.988977 1239921 out.go:177] * Starting control plane node running-upgrade-706449 in cluster running-upgrade-706449
	I1212 01:08:04.990923 1239921 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 01:08:04.993579 1239921 out.go:177] * Pulling base image ...
	I1212 01:08:04.995547 1239921 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1212 01:08:04.995700 1239921 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1212 01:08:05.028369 1239921 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1212 01:08:05.028394 1239921 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1212 01:08:05.074267 1239921 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1212 01:08:05.074414 1239921 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/running-upgrade-706449/config.json ...
	I1212 01:08:05.074666 1239921 cache.go:194] Successfully downloaded all kic artifacts
	I1212 01:08:05.074711 1239921 start.go:365] acquiring machines lock for running-upgrade-706449: {Name:mkd7ac0d510d4173ed584acda0c7d5e74c342f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:08:05.074780 1239921 start.go:369] acquired machines lock for "running-upgrade-706449" in 47.851µs
	I1212 01:08:05.074793 1239921 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:08:05.074799 1239921 fix.go:54] fixHost starting: 
	I1212 01:08:05.075085 1239921 cli_runner.go:164] Run: docker container inspect running-upgrade-706449 --format={{.State.Status}}
	I1212 01:08:05.075350 1239921 cache.go:107] acquiring lock: {Name:mk71819f230f97467ec9647c6a082f5eae8154b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:08:05.075429 1239921 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 01:08:05.075441 1239921 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 93.281µs
	I1212 01:08:05.075458 1239921 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 01:08:05.075469 1239921 cache.go:107] acquiring lock: {Name:mk3397fd1d4cc2fb6786d8e96c84feef7675727c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:08:05.075506 1239921 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1212 01:08:05.075511 1239921 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 45.062µs
	I1212 01:08:05.075518 1239921 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1212 01:08:05.075526 1239921 cache.go:107] acquiring lock: {Name:mk3bbe29f60dcf38772340b89c5e1658f8d68b9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:08:05.075552 1239921 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1212 01:08:05.075557 1239921 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 31.851µs
	I1212 01:08:05.075563 1239921 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1212 01:08:05.075583 1239921 cache.go:107] acquiring lock: {Name:mkd0a1d420a8f75b003208757daa7be30d358be0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:08:05.075609 1239921 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1212 01:08:05.075613 1239921 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 31.409µs
	I1212 01:08:05.075619 1239921 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1212 01:08:05.075629 1239921 cache.go:107] acquiring lock: {Name:mk9dae40bd3cfb60b0c3817b3768f1c355aec93a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:08:05.075652 1239921 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1212 01:08:05.075657 1239921 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 29.267µs
	I1212 01:08:05.075663 1239921 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1212 01:08:05.075672 1239921 cache.go:107] acquiring lock: {Name:mk304519857cdb668bd40c36d0c6438db21d0fdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:08:05.075696 1239921 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1212 01:08:05.075703 1239921 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 30.858µs
	I1212 01:08:05.075709 1239921 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1212 01:08:05.075717 1239921 cache.go:107] acquiring lock: {Name:mkba6503fa5665e88c24879cad4c829b28e1067b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:08:05.075740 1239921 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1212 01:08:05.075745 1239921 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 28.963µs
	I1212 01:08:05.075750 1239921 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1212 01:08:05.075761 1239921 cache.go:107] acquiring lock: {Name:mk0095d89eada2ba74f81f7fdd75395ecacb654f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:08:05.075788 1239921 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1212 01:08:05.075793 1239921 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 32.352µs
	I1212 01:08:05.075798 1239921 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1212 01:08:05.075803 1239921 cache.go:87] Successfully saved all images to host disk.
	I1212 01:08:05.097145 1239921 fix.go:102] recreateIfNeeded on running-upgrade-706449: state=Running err=<nil>
	W1212 01:08:05.097177 1239921 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 01:08:05.101214 1239921 out.go:177] * Updating the running docker "running-upgrade-706449" container ...
	I1212 01:08:05.103340 1239921 machine.go:88] provisioning docker machine ...
	I1212 01:08:05.103372 1239921 ubuntu.go:169] provisioning hostname "running-upgrade-706449"
	I1212 01:08:05.103451 1239921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-706449
	I1212 01:08:05.129058 1239921 main.go:141] libmachine: Using SSH client type: native
	I1212 01:08:05.129552 1239921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34196 <nil> <nil>}
	I1212 01:08:05.129568 1239921 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-706449 && echo "running-upgrade-706449" | sudo tee /etc/hostname
	I1212 01:08:05.295591 1239921 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-706449
	
	I1212 01:08:05.295747 1239921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-706449
	I1212 01:08:05.319808 1239921 main.go:141] libmachine: Using SSH client type: native
	I1212 01:08:05.320202 1239921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34196 <nil> <nil>}
	I1212 01:08:05.320220 1239921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-706449' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-706449/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-706449' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:08:05.468421 1239921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:08:05.468463 1239921 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17764-1111943/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-1111943/.minikube}
	I1212 01:08:05.468482 1239921 ubuntu.go:177] setting up certificates
	I1212 01:08:05.468499 1239921 provision.go:83] configureAuth start
	I1212 01:08:05.468571 1239921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-706449
	I1212 01:08:05.500817 1239921 provision.go:138] copyHostCerts
	I1212 01:08:05.500888 1239921 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem, removing ...
	I1212 01:08:05.500917 1239921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem
	I1212 01:08:05.501002 1239921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem (1082 bytes)
	I1212 01:08:05.501105 1239921 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem, removing ...
	I1212 01:08:05.501117 1239921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem
	I1212 01:08:05.501144 1239921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem (1123 bytes)
	I1212 01:08:05.501206 1239921 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem, removing ...
	I1212 01:08:05.501215 1239921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem
	I1212 01:08:05.501240 1239921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem (1679 bytes)
	I1212 01:08:05.501313 1239921 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-706449 san=[192.168.70.146 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-706449]
	I1212 01:08:05.826541 1239921 provision.go:172] copyRemoteCerts
	I1212 01:08:05.826615 1239921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:08:05.826664 1239921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-706449
	I1212 01:08:05.845093 1239921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34196 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/running-upgrade-706449/id_rsa Username:docker}
	I1212 01:08:05.948694 1239921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:08:05.975896 1239921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 01:08:06.005401 1239921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:08:06.034714 1239921 provision.go:86] duration metric: configureAuth took 566.189621ms
	I1212 01:08:06.034761 1239921 ubuntu.go:193] setting minikube options for container-runtime
	I1212 01:08:06.034966 1239921 config.go:182] Loaded profile config "running-upgrade-706449": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1212 01:08:06.035104 1239921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-706449
	I1212 01:08:06.057383 1239921 main.go:141] libmachine: Using SSH client type: native
	I1212 01:08:06.057866 1239921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34196 <nil> <nil>}
	I1212 01:08:06.057894 1239921 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:08:06.689436 1239921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:08:06.689463 1239921 machine.go:91] provisioned docker machine in 1.586106786s
	I1212 01:08:06.689474 1239921 start.go:300] post-start starting for "running-upgrade-706449" (driver="docker")
	I1212 01:08:06.689485 1239921 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:08:06.689572 1239921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:08:06.689625 1239921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-706449
	I1212 01:08:06.713071 1239921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34196 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/running-upgrade-706449/id_rsa Username:docker}
	I1212 01:08:06.811130 1239921 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:08:06.815524 1239921 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 01:08:06.815592 1239921 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:08:06.815618 1239921 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 01:08:06.815639 1239921 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1212 01:08:06.815674 1239921 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/addons for local assets ...
	I1212 01:08:06.815762 1239921 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/files for local assets ...
	I1212 01:08:06.815900 1239921 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> 11173832.pem in /etc/ssl/certs
	I1212 01:08:06.816056 1239921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:08:06.825442 1239921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem --> /etc/ssl/certs/11173832.pem (1708 bytes)
	I1212 01:08:06.850046 1239921 start.go:303] post-start completed in 160.556126ms
	I1212 01:08:06.850170 1239921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:08:06.850252 1239921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-706449
	I1212 01:08:06.874938 1239921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34196 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/running-upgrade-706449/id_rsa Username:docker}
	I1212 01:08:06.974365 1239921 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:08:06.981343 1239921 fix.go:56] fixHost completed within 1.906537746s
	I1212 01:08:06.981365 1239921 start.go:83] releasing machines lock for "running-upgrade-706449", held for 1.906576596s
	I1212 01:08:06.981445 1239921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-706449
	I1212 01:08:07.003804 1239921 ssh_runner.go:195] Run: cat /version.json
	I1212 01:08:07.003857 1239921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-706449
	I1212 01:08:07.004127 1239921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:08:07.004183 1239921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-706449
	I1212 01:08:07.024281 1239921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34196 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/running-upgrade-706449/id_rsa Username:docker}
	I1212 01:08:07.026068 1239921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34196 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/running-upgrade-706449/id_rsa Username:docker}
	W1212 01:08:07.121487 1239921 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1212 01:08:07.121564 1239921 ssh_runner.go:195] Run: systemctl --version
	I1212 01:08:07.243735 1239921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:08:07.402602 1239921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 01:08:07.408660 1239921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:08:07.433338 1239921 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 01:08:07.433431 1239921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:08:07.471353 1239921 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:08:07.471376 1239921 start.go:475] detecting cgroup driver to use...
	I1212 01:08:07.471430 1239921 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 01:08:07.471504 1239921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	W1212 01:08:07.516768 1239921 cruntime.go:290] disable failed: sudo systemctl stop -f containerd: Process exited with status 1
	stdout:
	
	stderr:
	Job for containerd.service canceled.
	I1212 01:08:07.516942 1239921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	W1212 01:08:07.534941 1239921 crio.go:202] disableOthers: containerd is still active
	I1212 01:08:07.535068 1239921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:08:07.554333 1239921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 01:08:07.554425 1239921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:08:07.577848 1239921 out.go:177] 
	W1212 01:08:07.579879 1239921 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1212 01:08:07.579901 1239921 out.go:239] * 
	* 
	W1212 01:08:07.580947 1239921 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:08:07.583508 1239921 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-706449 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
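Note on the exit status 90 above: the stderr block shows the new binary failing at the RUNTIME_ENABLE step, where it runs sed against /etc/crio/crio.conf.d/02-crio.conf but the container provisioned by the v1.17.0 binary has no such file. A minimal guarded sketch of that pause_image update is shown below; it is a hypothetical illustration only, not minikube's actual remediation, and the drop-in path and the [crio.image] section name are assumptions carried over from the log and standard CRI-O configuration.

	# assumption: CRI-O reads drop-in config from /etc/crio/crio.conf.d/
	sudo mkdir -p /etc/crio/crio.conf.d
	if [ -f /etc/crio/crio.conf.d/02-crio.conf ]; then
		# file exists: rewrite the pause_image line, as the command in the log above attempts
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	else
		# file missing (the case that fails above with "No such file or directory"):
		# create the drop-in with the desired pause image instead of editing it
		printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.2"\n' | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
	fi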
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-12 01:08:07.618755768 +0000 UTC m=+3430.973742133
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-706449
helpers_test.go:235: (dbg) docker inspect running-upgrade-706449:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a0cb96c817db6230967ce85e3bce1d81fdabfa9c6969d8466853bd0259d5ae0d",
	        "Created": "2023-12-12T01:07:15.067366373Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1236392,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T01:07:15.484333155Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/a0cb96c817db6230967ce85e3bce1d81fdabfa9c6969d8466853bd0259d5ae0d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a0cb96c817db6230967ce85e3bce1d81fdabfa9c6969d8466853bd0259d5ae0d/hostname",
	        "HostsPath": "/var/lib/docker/containers/a0cb96c817db6230967ce85e3bce1d81fdabfa9c6969d8466853bd0259d5ae0d/hosts",
	        "LogPath": "/var/lib/docker/containers/a0cb96c817db6230967ce85e3bce1d81fdabfa9c6969d8466853bd0259d5ae0d/a0cb96c817db6230967ce85e3bce1d81fdabfa9c6969d8466853bd0259d5ae0d-json.log",
	        "Name": "/running-upgrade-706449",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-706449:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-706449",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/37e07ffef547da4317bc47d3c5a7602601d127f69360cc66c3883f528eece713-init/diff:/var/lib/docker/overlay2/fb4d70b77c8563911ee6b27f4a1db561c619dc8a7965acb41ef3fb4bdff503be/diff:/var/lib/docker/overlay2/86041beeed99538e1b99d40188f9672ba1a7b693ead838775e51b9a3510d7004/diff:/var/lib/docker/overlay2/5fa8514ae4f8b5fabb33f5572ef3fc88b93d58bfa6187659052d80a45eded3af/diff:/var/lib/docker/overlay2/98997cc40b18e4c7f99d86ca89eca96db185976a8229b827c48037b3511b21ef/diff:/var/lib/docker/overlay2/8e633cc4768e89a51bf1ec47b536188f1cd975cbaf3cac0637321afa68d8975f/diff:/var/lib/docker/overlay2/dc22798f3db1a9be08fd20b00b1d80e7bb203ce5ada74ac693db694fa06b2cd6/diff:/var/lib/docker/overlay2/68ac3a76400353310052d55cc6e37bd5540e2d6f10fc56468e116f688f921bfd/diff:/var/lib/docker/overlay2/773851fafd814eb69397b50069db10269c06dc68fda1c986413992826215cd1d/diff:/var/lib/docker/overlay2/c0d63c0a96269ecb94e73f83ce87f7695a6a59e9308ab0baaf1da65aedb051b3/diff:/var/lib/docker/overlay2/845d7b
c9c7dac22cf309ee6c9e992a083190b0c0fcdf8e978c338c8dea10935d/diff:/var/lib/docker/overlay2/0fe9f3fadf7c0afec8d8354cff788c1a00e580021a84e2a1c35518fa89c1989f/diff:/var/lib/docker/overlay2/2bc2871676a297358fbbba96cb75f1f09f37dd50953620bf3a41f857ad0956a0/diff:/var/lib/docker/overlay2/2b11188e983f66cf966ebb4b3e29de020acfdfb78f4f42157835e23e0d32f64a/diff:/var/lib/docker/overlay2/1b6affb2260190abbbc754fb997ae73a40e6d547281c33abe392212766f8182a/diff:/var/lib/docker/overlay2/b37fd8e98065e8cd178447cdd01670dbc5b1b51610175fb056b405d85bf89381/diff:/var/lib/docker/overlay2/dd56c5aeb97575fb22f9777c627e96f411d8a56379da68abad79576b62cc9e7f/diff:/var/lib/docker/overlay2/db10d052d823b9ab3a5aca8eb7443a2c237420e3be080c618924ad8de6a82c1a/diff:/var/lib/docker/overlay2/244a5571dbc6849ccd78f48209e88a3596a55850e61d74aca2560d092d5fd64e/diff:/var/lib/docker/overlay2/9edd140b9c83fa11d5ee9ef02c61078cb869cf812357ba31a69a3b71481225b7/diff:/var/lib/docker/overlay2/8165ad9ab11354f0a63b6943e90be2fd3047cf8aeff7e88fa3bbe792e902d545/diff:/var/lib/d
ocker/overlay2/724e5859245573f0dd2cfd669dd54bdeb2184bee76a2c4a358149700c19c9afe/diff:/var/lib/docker/overlay2/5cbf50571e83a50e4900e8789997c3cf5751b5db2edd146e06b4394344768b01/diff:/var/lib/docker/overlay2/f0e5148a9995af353d9a8218630c98d83a09070ec19c884e493290aa9450f1aa/diff:/var/lib/docker/overlay2/6bb06718328c8876e36d46c77dc9e1fa6c591dec6f731269e496bc386e50082d/diff:/var/lib/docker/overlay2/b89a76e27377efdeaa94f38000311e12bc5b2e7e1c01ce75aa3fdcc1f94b0357/diff:/var/lib/docker/overlay2/33670fd67dff84f14c9f0b05ec0488760e7de910c162dbb841988825840bd5fa/diff:/var/lib/docker/overlay2/d186a153db1787878ee9e8e650f5d5c97ffa053d2eaba48a0a56ececa7e91b21/diff:/var/lib/docker/overlay2/1e473dbb13fa50cad825a9d2e59c969ea3365fd85d98217586d6d9f35b9c9d8c/diff:/var/lib/docker/overlay2/b31e3c4849ff09b980a0a0a7b60ddc82fefd059ee287b9b0ca6fe56ad459cbb9/diff:/var/lib/docker/overlay2/123ad757fb9be19b516a9f0f271570fd2c8b7b3ee3e064246cd946cf672b6df9/diff:/var/lib/docker/overlay2/8827fdaf504bfc123666bb014bb2f5ec5035714dd64b88a7aaf39adef16
d5450/diff:/var/lib/docker/overlay2/b594e39b8597168d620d39b8268c8f3bf1ad7d1c38c7ec7d96ce895b4cbd040c/diff:/var/lib/docker/overlay2/d72354069a4087a4614b497cddaa9f1de7bf3f16a8c9b4b3ff9f257793caa4cd/diff:/var/lib/docker/overlay2/67d8d6204dc15553eb76db847a2d92f3dd32ce028d307e3e03b8ba742c824c6d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37e07ffef547da4317bc47d3c5a7602601d127f69360cc66c3883f528eece713/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37e07ffef547da4317bc47d3c5a7602601d127f69360cc66c3883f528eece713/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37e07ffef547da4317bc47d3c5a7602601d127f69360cc66c3883f528eece713/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-706449",
	                "Source": "/var/lib/docker/volumes/running-upgrade-706449/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-706449",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-706449",
	                "name.minikube.sigs.k8s.io": "running-upgrade-706449",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a331aadb6df0fb32e147bdfba16f6052679f9e217aceb1650e7d7fa3e4875156",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34196"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34195"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34194"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34193"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a331aadb6df0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-706449": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.146"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a0cb96c817db",
	                        "running-upgrade-706449"
	                    ],
	                    "NetworkID": "c28e0abbef1625aa7eb08cd793ae76e35f63142208d096e8a59651bd73569f33",
	                    "EndpointID": "fcc2f4ddf25a735b52aa5cdec29cce5c32059aad99dbe893f8d1960ca6f06a4b",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.146",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:92",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
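The stdout above is the raw docker container inspect dump for the running-upgrade-706449 container. For triage it can help to pull single fields out of that JSON with a Go-template format string instead of reading the whole dump; a minimal sketch follows, using the container name from the log above, and the port template is the same one the harness itself runs later in this report.

	docker container inspect running-upgrade-706449 --format={{.State.Status}}
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' running-upgrade-706449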
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-706449 -n running-upgrade-706449
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-706449 -n running-upgrade-706449: exit status 4 (562.830731ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 01:08:08.126504 1240459 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-706449" does not appear in /home/jenkins/minikube-integration/17764-1111943/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-706449" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-706449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-706449
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-706449: (3.019874737s)
--- FAIL: TestRunningBinaryUpgrade (85.52s)
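The status check in this failure exits with "kubeconfig endpoint: extract IP" because the profile no longer appears in the kubeconfig, and the warning in the stdout above already names the fix. A minimal remediation sketch, assuming standard minikube and kubectl flags and using the profile name from this test (the cleanup step above deleted the profile afterwards, so this only applies while the cluster is still up):

	out/minikube-linux-arm64 -p running-upgrade-706449 update-context
	kubectl config use-context running-upgrade-706449
	kubectl get nodes --context running-upgrade-706449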

                                                
                                    
TestMissingContainerUpgrade (178.15s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.3460457101.exe start -p missing-upgrade-140841 --memory=2200 --driver=docker  --container-runtime=crio
E1212 01:02:34.050074 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.3460457101.exe start -p missing-upgrade-140841 --memory=2200 --driver=docker  --container-runtime=crio: (2m14.182609906s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-140841
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-140841: (1.729324777s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-140841
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-140841 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-140841 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (38.934114139s)
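The commands logged above are the scenario this test exercises: start a cluster with the old v1.17.0 release, remove its container behind minikube's back with docker stop and docker rm, then ask the current binary to recover the profile. Condensed into a shell sketch (binary paths, profile name, and flags copied from the log; the captured output of the failing final step follows below):

	/tmp/minikube-v1.17.0.3460457101.exe start -p missing-upgrade-140841 --memory=2200 --driver=docker --container-runtime=crio
	docker stop missing-upgrade-140841
	docker rm missing-upgrade-140841
	out/minikube-linux-arm64 start -p missing-upgrade-140841 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio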

                                                
                                                
-- stdout --
	* [missing-upgrade-140841] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-140841 in cluster missing-upgrade-140841
	* Pulling base image ...
	* docker "missing-upgrade-140841" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 01:04:21.887750 1226203 out.go:296] Setting OutFile to fd 1 ...
	I1212 01:04:21.887967 1226203 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 01:04:21.887981 1226203 out.go:309] Setting ErrFile to fd 2...
	I1212 01:04:21.887988 1226203 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 01:04:21.888228 1226203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	I1212 01:04:21.888583 1226203 out.go:303] Setting JSON to false
	I1212 01:04:21.889594 1226203 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":28008,"bootTime":1702315054,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 01:04:21.889662 1226203 start.go:138] virtualization:  
	I1212 01:04:21.895502 1226203 out.go:177] * [missing-upgrade-140841] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 01:04:21.897482 1226203 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 01:04:21.897549 1226203 notify.go:220] Checking for updates...
	I1212 01:04:21.899567 1226203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:04:21.901700 1226203 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 01:04:21.903634 1226203 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	I1212 01:04:21.905787 1226203 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:04:21.908071 1226203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:04:21.910752 1226203 config.go:182] Loaded profile config "missing-upgrade-140841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1212 01:04:21.913394 1226203 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1212 01:04:21.915394 1226203 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 01:04:21.939572 1226203 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 01:04:21.939689 1226203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:04:22.022132 1226203 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2023-12-12 01:04:22.008811955 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 01:04:22.022239 1226203 docker.go:295] overlay module found
	I1212 01:04:22.024713 1226203 out.go:177] * Using the docker driver based on existing profile
	I1212 01:04:22.026606 1226203 start.go:298] selected driver: docker
	I1212 01:04:22.026623 1226203 start.go:902] validating driver "docker" against &{Name:missing-upgrade-140841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-140841 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.42 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 01:04:22.026770 1226203 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:04:22.027397 1226203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:04:22.092984 1226203 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2023-12-12 01:04:22.083376594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 01:04:22.093396 1226203 cni.go:84] Creating CNI manager for ""
	I1212 01:04:22.093417 1226203 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 01:04:22.093431 1226203 start_flags.go:323] config:
	{Name:missing-upgrade-140841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-140841 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.42 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 01:04:22.096018 1226203 out.go:177] * Starting control plane node missing-upgrade-140841 in cluster missing-upgrade-140841
	I1212 01:04:22.098120 1226203 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 01:04:22.100018 1226203 out.go:177] * Pulling base image ...
	I1212 01:04:22.101969 1226203 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1212 01:04:22.102051 1226203 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1212 01:04:22.120621 1226203 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I1212 01:04:22.120826 1226203 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I1212 01:04:22.121284 1226203 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W1212 01:04:22.175431 1226203 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1212 01:04:22.175644 1226203 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/missing-upgrade-140841/config.json ...
	I1212 01:04:22.175703 1226203 cache.go:107] acquiring lock: {Name:mk71819f230f97467ec9647c6a082f5eae8154b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:04:22.175787 1226203 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 01:04:22.175798 1226203 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 101.683µs
	I1212 01:04:22.175807 1226203 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 01:04:22.175817 1226203 cache.go:107] acquiring lock: {Name:mk3397fd1d4cc2fb6786d8e96c84feef7675727c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:04:22.175902 1226203 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I1212 01:04:22.175929 1226203 cache.go:107] acquiring lock: {Name:mk9dae40bd3cfb60b0c3817b3768f1c355aec93a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:04:22.176019 1226203 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I1212 01:04:22.176194 1226203 cache.go:107] acquiring lock: {Name:mk3bbe29f60dcf38772340b89c5e1658f8d68b9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:04:22.176248 1226203 cache.go:107] acquiring lock: {Name:mk304519857cdb668bd40c36d0c6438db21d0fdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:04:22.176303 1226203 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1212 01:04:22.176351 1226203 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1212 01:04:22.176411 1226203 cache.go:107] acquiring lock: {Name:mkd0a1d420a8f75b003208757daa7be30d358be0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:04:22.176523 1226203 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I1212 01:04:22.176901 1226203 cache.go:107] acquiring lock: {Name:mk0095d89eada2ba74f81f7fdd75395ecacb654f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:04:22.177016 1226203 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I1212 01:04:22.177208 1226203 cache.go:107] acquiring lock: {Name:mkba6503fa5665e88c24879cad4c829b28e1067b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:04:22.177615 1226203 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I1212 01:04:22.177934 1226203 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:04:22.178627 1226203 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I1212 01:04:22.179031 1226203 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:04:22.179240 1226203 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1212 01:04:22.179302 1226203 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I1212 01:04:22.179522 1226203 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 01:04:22.179764 1226203 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	W1212 01:04:22.498045 1226203 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I1212 01:04:22.498151 1226203 cache.go:162] opening:  /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I1212 01:04:22.530937 1226203 cache.go:162] opening:  /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	W1212 01:04:22.562336 1226203 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I1212 01:04:22.562442 1226203 cache.go:162] opening:  /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	W1212 01:04:22.568429 1226203 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I1212 01:04:22.568547 1226203 cache.go:162] opening:  /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I1212 01:04:22.576748 1226203 cache.go:162] opening:  /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I1212 01:04:22.577449 1226203 cache.go:162] opening:  /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1212 01:04:22.595750 1226203 cache.go:162] opening:  /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	I1212 01:04:22.687281 1226203 cache.go:157] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1212 01:04:22.687357 1226203 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 511.111236ms
	I1212 01:04:22.687382 1226203 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  17.69 KiB / 287.99 MiB [>] 0.01% ? p/s ?
	I1212 01:04:23.017524 1226203 cache.go:157] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1212 01:04:23.017599 1226203 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 840.714674ms
	I1212 01:04:23.017628 1226203 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1212 01:04:23.065963 1226203 cache.go:157] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1212 01:04:23.066034 1226203 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 889.623793ms
	I1212 01:04:23.066054 1226203 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  5.30 MiB / 287.99 MiB [>_] 1.84% ? p/s ?
	I1212 01:04:23.219610 1226203 cache.go:157] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1212 01:04:23.219680 1226203 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.043860543s
	I1212 01:04:23.219705 1226203 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  16.02 MiB / 287.99 MiB  5.56% 14.61 MiB
	I1212 01:04:23.720724 1226203 cache.go:157] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1212 01:04:23.720754 1226203 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.54456276s
	I1212 01:04:23.720767 1226203 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  27.44 MiB / 287.99 MiB  9.53% 15.53 MiB
	I1212 01:04:24.502063 1226203 cache.go:157] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1212 01:04:24.502143 1226203 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.326216448s
	I1212 01:04:24.502170 1226203 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  171.80 MiB / 287.99 MiB  59.66% 23.82 Mi
	I1212 01:04:27.527749 1226203 cache.go:157] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1212 01:04:27.527834 1226203 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 5.350632096s
	I1212 01:04:27.527860 1226203 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1212 01:04:27.527914 1226203 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 34.65 M
	I1212 01:04:31.100120 1226203 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I1212 01:04:31.100142 1226203 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I1212 01:04:32.061584 1226203 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I1212 01:04:32.061620 1226203 cache.go:194] Successfully downloaded all kic artifacts
	I1212 01:04:32.061691 1226203 start.go:365] acquiring machines lock for missing-upgrade-140841: {Name:mkda5e30146aee84698a1c2a745cdf473faa73ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:04:32.061757 1226203 start.go:369] acquired machines lock for "missing-upgrade-140841" in 43.666µs
	I1212 01:04:32.061789 1226203 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:04:32.061800 1226203 fix.go:54] fixHost starting: 
	I1212 01:04:32.062067 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Status}}
	W1212 01:04:32.078627 1226203 cli_runner.go:211] docker container inspect missing-upgrade-140841 --format={{.State.Status}} returned with exit code 1
	I1212 01:04:32.078684 1226203 fix.go:102] recreateIfNeeded on missing-upgrade-140841: state= err=unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:32.078704 1226203 fix.go:107] machineExists: false. err=machine does not exist
	I1212 01:04:32.081927 1226203 out.go:177] * docker "missing-upgrade-140841" container is missing, will recreate.
	I1212 01:04:32.084134 1226203 delete.go:124] DEMOLISHING missing-upgrade-140841 ...
	I1212 01:04:32.084224 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Status}}
	W1212 01:04:32.101187 1226203 cli_runner.go:211] docker container inspect missing-upgrade-140841 --format={{.State.Status}} returned with exit code 1
	W1212 01:04:32.101275 1226203 stop.go:75] unable to get state: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:32.101302 1226203 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:32.101772 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Status}}
	W1212 01:04:32.119029 1226203 cli_runner.go:211] docker container inspect missing-upgrade-140841 --format={{.State.Status}} returned with exit code 1
	I1212 01:04:32.119092 1226203 delete.go:82] Unable to get host status for missing-upgrade-140841, assuming it has already been deleted: state: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:32.119164 1226203 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-140841
	W1212 01:04:32.136218 1226203 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-140841 returned with exit code 1
	I1212 01:04:32.136255 1226203 kic.go:371] could not find the container missing-upgrade-140841 to remove it. will try anyways
	I1212 01:04:32.136313 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Status}}
	W1212 01:04:32.156492 1226203 cli_runner.go:211] docker container inspect missing-upgrade-140841 --format={{.State.Status}} returned with exit code 1
	W1212 01:04:32.156550 1226203 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:32.156617 1226203 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-140841 /bin/bash -c "sudo init 0"
	W1212 01:04:32.173436 1226203 cli_runner.go:211] docker exec --privileged -t missing-upgrade-140841 /bin/bash -c "sudo init 0" returned with exit code 1
	I1212 01:04:32.173482 1226203 oci.go:650] error shutdown missing-upgrade-140841: docker exec --privileged -t missing-upgrade-140841 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:33.173688 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Status}}
	W1212 01:04:33.190586 1226203 cli_runner.go:211] docker container inspect missing-upgrade-140841 --format={{.State.Status}} returned with exit code 1
	I1212 01:04:33.190647 1226203 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:33.190664 1226203 oci.go:664] temporary error: container missing-upgrade-140841 status is  but expect it to be exited
	I1212 01:04:33.190701 1226203 retry.go:31] will retry after 558.702326ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:33.750505 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Status}}
	W1212 01:04:33.766811 1226203 cli_runner.go:211] docker container inspect missing-upgrade-140841 --format={{.State.Status}} returned with exit code 1
	I1212 01:04:33.766872 1226203 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:33.766891 1226203 oci.go:664] temporary error: container missing-upgrade-140841 status is  but expect it to be exited
	I1212 01:04:33.766916 1226203 retry.go:31] will retry after 1.015139067s: couldn't verify container is exited. %v: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:34.782459 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Status}}
	W1212 01:04:34.801700 1226203 cli_runner.go:211] docker container inspect missing-upgrade-140841 --format={{.State.Status}} returned with exit code 1
	I1212 01:04:34.801769 1226203 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:34.801783 1226203 oci.go:664] temporary error: container missing-upgrade-140841 status is  but expect it to be exited
	I1212 01:04:34.801809 1226203 retry.go:31] will retry after 574.890846ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:35.377610 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Status}}
	W1212 01:04:35.403338 1226203 cli_runner.go:211] docker container inspect missing-upgrade-140841 --format={{.State.Status}} returned with exit code 1
	I1212 01:04:35.403399 1226203 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:35.403412 1226203 oci.go:664] temporary error: container missing-upgrade-140841 status is  but expect it to be exited
	I1212 01:04:35.403439 1226203 retry.go:31] will retry after 1.017314476s: couldn't verify container is exited. %v: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:36.421431 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Status}}
	W1212 01:04:36.446639 1226203 cli_runner.go:211] docker container inspect missing-upgrade-140841 --format={{.State.Status}} returned with exit code 1
	I1212 01:04:36.446706 1226203 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:36.446719 1226203 oci.go:664] temporary error: container missing-upgrade-140841 status is  but expect it to be exited
	I1212 01:04:36.446745 1226203 retry.go:31] will retry after 3.372212687s: couldn't verify container is exited. %v: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:39.819187 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Status}}
	W1212 01:04:39.840168 1226203 cli_runner.go:211] docker container inspect missing-upgrade-140841 --format={{.State.Status}} returned with exit code 1
	I1212 01:04:39.840234 1226203 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:39.840249 1226203 oci.go:664] temporary error: container missing-upgrade-140841 status is  but expect it to be exited
	I1212 01:04:39.840274 1226203 retry.go:31] will retry after 3.686669834s: couldn't verify container is exited. %v: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:43.527903 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Status}}
	W1212 01:04:43.557362 1226203 cli_runner.go:211] docker container inspect missing-upgrade-140841 --format={{.State.Status}} returned with exit code 1
	I1212 01:04:43.557434 1226203 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:43.557443 1226203 oci.go:664] temporary error: container missing-upgrade-140841 status is  but expect it to be exited
	I1212 01:04:43.557466 1226203 retry.go:31] will retry after 5.321927028s: couldn't verify container is exited. %v: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:48.880437 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Status}}
	W1212 01:04:48.905693 1226203 cli_runner.go:211] docker container inspect missing-upgrade-140841 --format={{.State.Status}} returned with exit code 1
	I1212 01:04:48.905753 1226203 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	I1212 01:04:48.905765 1226203 oci.go:664] temporary error: container missing-upgrade-140841 status is  but expect it to be exited
	I1212 01:04:48.905805 1226203 oci.go:88] couldn't shut down missing-upgrade-140841 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-140841": docker container inspect missing-upgrade-140841 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-140841
	 
	I1212 01:04:48.905872 1226203 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-140841
	I1212 01:04:48.930620 1226203 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-140841
	W1212 01:04:48.956259 1226203 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-140841 returned with exit code 1
	I1212 01:04:48.956348 1226203 cli_runner.go:164] Run: docker network inspect missing-upgrade-140841 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:04:48.979048 1226203 cli_runner.go:164] Run: docker network rm missing-upgrade-140841
	I1212 01:04:49.114709 1226203 fix.go:114] Sleeping 1 second for extra luck!
	I1212 01:04:50.114792 1226203 start.go:125] createHost starting for "" (driver="docker")
	I1212 01:04:50.119145 1226203 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1212 01:04:50.119307 1226203 start.go:159] libmachine.API.Create for "missing-upgrade-140841" (driver="docker")
	I1212 01:04:50.119328 1226203 client.go:168] LocalClient.Create starting
	I1212 01:04:50.119398 1226203 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem
	I1212 01:04:50.119433 1226203 main.go:141] libmachine: Decoding PEM data...
	I1212 01:04:50.119447 1226203 main.go:141] libmachine: Parsing certificate...
	I1212 01:04:50.119505 1226203 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem
	I1212 01:04:50.119521 1226203 main.go:141] libmachine: Decoding PEM data...
	I1212 01:04:50.119532 1226203 main.go:141] libmachine: Parsing certificate...
	I1212 01:04:50.120359 1226203 cli_runner.go:164] Run: docker network inspect missing-upgrade-140841 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 01:04:50.153928 1226203 cli_runner.go:211] docker network inspect missing-upgrade-140841 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 01:04:50.154008 1226203 network_create.go:281] running [docker network inspect missing-upgrade-140841] to gather additional debugging logs...
	I1212 01:04:50.154024 1226203 cli_runner.go:164] Run: docker network inspect missing-upgrade-140841
	W1212 01:04:50.191780 1226203 cli_runner.go:211] docker network inspect missing-upgrade-140841 returned with exit code 1
	I1212 01:04:50.191809 1226203 network_create.go:284] error running [docker network inspect missing-upgrade-140841]: docker network inspect missing-upgrade-140841: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-140841 not found
	I1212 01:04:50.191823 1226203 network_create.go:286] output of [docker network inspect missing-upgrade-140841]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-140841 not found
	
	** /stderr **
	I1212 01:04:50.191953 1226203 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:04:50.221925 1226203 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fb49185403af IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:74:1f:5b:43} reservation:<nil>}
	I1212 01:04:50.222308 1226203 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b6f78e5fcd5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:96:51:83:75} reservation:<nil>}
	I1212 01:04:50.222596 1226203 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bfa637d37f79 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:74:a4:7f:95} reservation:<nil>}
	I1212 01:04:50.223023 1226203 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40029a0d40}
	I1212 01:04:50.223041 1226203 network_create.go:124] attempt to create docker network missing-upgrade-140841 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1212 01:04:50.223101 1226203 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-140841 missing-upgrade-140841
	I1212 01:04:50.314667 1226203 network_create.go:108] docker network missing-upgrade-140841 192.168.76.0/24 created
	I1212 01:04:50.314696 1226203 kic.go:121] calculated static IP "192.168.76.2" for the "missing-upgrade-140841" container
	I1212 01:04:50.314766 1226203 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 01:04:50.338224 1226203 cli_runner.go:164] Run: docker volume create missing-upgrade-140841 --label name.minikube.sigs.k8s.io=missing-upgrade-140841 --label created_by.minikube.sigs.k8s.io=true
	I1212 01:04:50.375618 1226203 oci.go:103] Successfully created a docker volume missing-upgrade-140841
	I1212 01:04:50.375705 1226203 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-140841-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-140841 --entrypoint /usr/bin/test -v missing-upgrade-140841:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I1212 01:04:51.312797 1226203 oci.go:107] Successfully prepared a docker volume missing-upgrade-140841
	I1212 01:04:51.312827 1226203 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W1212 01:04:51.312957 1226203 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 01:04:51.313075 1226203 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 01:04:51.459414 1226203 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-140841 --name missing-upgrade-140841 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-140841 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-140841 --network missing-upgrade-140841 --ip 192.168.76.2 --volume missing-upgrade-140841:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I1212 01:04:51.955785 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Running}}
	I1212 01:04:51.987694 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Status}}
	I1212 01:04:52.018286 1226203 cli_runner.go:164] Run: docker exec missing-upgrade-140841 stat /var/lib/dpkg/alternatives/iptables
	I1212 01:04:52.100520 1226203 oci.go:144] the created container "missing-upgrade-140841" has a running status.
	I1212 01:04:52.100551 1226203 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/missing-upgrade-140841/id_rsa...
	I1212 01:04:52.761077 1226203 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/missing-upgrade-140841/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 01:04:52.792877 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Status}}
	I1212 01:04:52.820677 1226203 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 01:04:52.820701 1226203 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-140841 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 01:04:52.912716 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Status}}
	I1212 01:04:52.934916 1226203 machine.go:88] provisioning docker machine ...
	I1212 01:04:52.934950 1226203 ubuntu.go:169] provisioning hostname "missing-upgrade-140841"
	I1212 01:04:52.935014 1226203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-140841
	I1212 01:04:52.961384 1226203 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:52.961833 1226203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34184 <nil> <nil>}
	I1212 01:04:52.961854 1226203 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-140841 && echo "missing-upgrade-140841" | sudo tee /etc/hostname
	I1212 01:04:52.962463 1226203 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56088->127.0.0.1:34184: read: connection reset by peer
	I1212 01:04:56.112801 1226203 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-140841
	
	I1212 01:04:56.112883 1226203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-140841
	I1212 01:04:56.131277 1226203 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:56.131689 1226203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34184 <nil> <nil>}
	I1212 01:04:56.131713 1226203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-140841' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-140841/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-140841' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:04:56.270097 1226203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:04:56.270122 1226203 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17764-1111943/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-1111943/.minikube}
	I1212 01:04:56.270141 1226203 ubuntu.go:177] setting up certificates
	I1212 01:04:56.270151 1226203 provision.go:83] configureAuth start
	I1212 01:04:56.270208 1226203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-140841
	I1212 01:04:56.287337 1226203 provision.go:138] copyHostCerts
	I1212 01:04:56.287402 1226203 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem, removing ...
	I1212 01:04:56.287410 1226203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem
	I1212 01:04:56.287495 1226203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem (1679 bytes)
	I1212 01:04:56.287581 1226203 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem, removing ...
	I1212 01:04:56.287586 1226203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem
	I1212 01:04:56.287610 1226203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem (1082 bytes)
	I1212 01:04:56.287662 1226203 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem, removing ...
	I1212 01:04:56.287667 1226203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem
	I1212 01:04:56.287689 1226203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem (1123 bytes)
	I1212 01:04:56.287730 1226203 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-140841 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-140841]
	I1212 01:04:57.035981 1226203 provision.go:172] copyRemoteCerts
	I1212 01:04:57.036073 1226203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:04:57.036122 1226203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-140841
	I1212 01:04:57.060377 1226203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34184 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/missing-upgrade-140841/id_rsa Username:docker}
	I1212 01:04:57.163738 1226203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:04:57.190048 1226203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:04:57.213284 1226203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 01:04:57.235722 1226203 provision.go:86] duration metric: configureAuth took 965.558416ms
	I1212 01:04:57.235751 1226203 ubuntu.go:193] setting minikube options for container-runtime
	I1212 01:04:57.235928 1226203 config.go:182] Loaded profile config "missing-upgrade-140841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1212 01:04:57.236033 1226203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-140841
	I1212 01:04:57.253696 1226203 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:57.254136 1226203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34184 <nil> <nil>}
	I1212 01:04:57.254157 1226203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:04:57.648300 1226203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:04:57.648329 1226203 machine.go:91] provisioned docker machine in 4.713389165s
	I1212 01:04:57.648339 1226203 client.go:171] LocalClient.Create took 7.529004811s
	I1212 01:04:57.648351 1226203 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-140841" took 7.529046869s
	I1212 01:04:57.648359 1226203 start.go:300] post-start starting for "missing-upgrade-140841" (driver="docker")
	I1212 01:04:57.648368 1226203 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:04:57.648435 1226203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:04:57.648480 1226203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-140841
	I1212 01:04:57.667865 1226203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34184 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/missing-upgrade-140841/id_rsa Username:docker}
	I1212 01:04:57.766310 1226203 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:04:57.770067 1226203 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 01:04:57.770136 1226203 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:04:57.770156 1226203 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 01:04:57.770165 1226203 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1212 01:04:57.770175 1226203 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/addons for local assets ...
	I1212 01:04:57.770235 1226203 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/files for local assets ...
	I1212 01:04:57.770320 1226203 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> 11173832.pem in /etc/ssl/certs
	I1212 01:04:57.770424 1226203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:04:57.778828 1226203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem --> /etc/ssl/certs/11173832.pem (1708 bytes)
	I1212 01:04:57.801662 1226203 start.go:303] post-start completed in 153.288432ms
	I1212 01:04:57.802034 1226203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-140841
	I1212 01:04:57.819478 1226203 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/missing-upgrade-140841/config.json ...
	I1212 01:04:57.819762 1226203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:04:57.819827 1226203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-140841
	I1212 01:04:57.838173 1226203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34184 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/missing-upgrade-140841/id_rsa Username:docker}
	I1212 01:04:57.936856 1226203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:04:57.942465 1226203 start.go:128] duration metric: createHost completed in 7.827634309s
	I1212 01:04:57.942596 1226203 cli_runner.go:164] Run: docker container inspect missing-upgrade-140841 --format={{.State.Status}}
	W1212 01:04:57.959445 1226203 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 01:04:57.959474 1226203 machine.go:88] provisioning docker machine ...
	I1212 01:04:57.959499 1226203 ubuntu.go:169] provisioning hostname "missing-upgrade-140841"
	I1212 01:04:57.959564 1226203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-140841
	I1212 01:04:57.977558 1226203 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:57.978092 1226203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34184 <nil> <nil>}
	I1212 01:04:57.978114 1226203 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-140841 && echo "missing-upgrade-140841" | sudo tee /etc/hostname
	I1212 01:04:58.128549 1226203 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-140841
	
	I1212 01:04:58.128633 1226203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-140841
	I1212 01:04:58.147219 1226203 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:58.147630 1226203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34184 <nil> <nil>}
	I1212 01:04:58.147654 1226203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-140841' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-140841/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-140841' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:04:58.286404 1226203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:04:58.286468 1226203 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17764-1111943/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-1111943/.minikube}
	I1212 01:04:58.286491 1226203 ubuntu.go:177] setting up certificates
	I1212 01:04:58.286501 1226203 provision.go:83] configureAuth start
	I1212 01:04:58.286581 1226203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-140841
	I1212 01:04:58.308044 1226203 provision.go:138] copyHostCerts
	I1212 01:04:58.308112 1226203 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem, removing ...
	I1212 01:04:58.308124 1226203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem
	I1212 01:04:58.308197 1226203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem (1082 bytes)
	I1212 01:04:58.308293 1226203 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem, removing ...
	I1212 01:04:58.308303 1226203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem
	I1212 01:04:58.308331 1226203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem (1123 bytes)
	I1212 01:04:58.308409 1226203 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem, removing ...
	I1212 01:04:58.308752 1226203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem
	I1212 01:04:58.308807 1226203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem (1679 bytes)
	I1212 01:04:58.308891 1226203 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-140841 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-140841]
	I1212 01:04:58.965933 1226203 provision.go:172] copyRemoteCerts
	I1212 01:04:58.966003 1226203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:04:58.966050 1226203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-140841
	I1212 01:04:58.983948 1226203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34184 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/missing-upgrade-140841/id_rsa Username:docker}
	I1212 01:04:59.086425 1226203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:04:59.108171 1226203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 01:04:59.129955 1226203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:04:59.151658 1226203 provision.go:86] duration metric: configureAuth took 865.127186ms
	I1212 01:04:59.151687 1226203 ubuntu.go:193] setting minikube options for container-runtime
	I1212 01:04:59.151932 1226203 config.go:182] Loaded profile config "missing-upgrade-140841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1212 01:04:59.152041 1226203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-140841
	I1212 01:04:59.170087 1226203 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:59.170496 1226203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34184 <nil> <nil>}
	I1212 01:04:59.170518 1226203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:04:59.503252 1226203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:04:59.503292 1226203 machine.go:91] provisioned docker machine in 1.543809401s
	I1212 01:04:59.503304 1226203 start.go:300] post-start starting for "missing-upgrade-140841" (driver="docker")
	I1212 01:04:59.503315 1226203 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:04:59.503384 1226203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:04:59.503431 1226203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-140841
	I1212 01:04:59.522449 1226203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34184 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/missing-upgrade-140841/id_rsa Username:docker}
	I1212 01:04:59.622107 1226203 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:04:59.625865 1226203 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 01:04:59.625892 1226203 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:04:59.625903 1226203 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 01:04:59.625910 1226203 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1212 01:04:59.625920 1226203 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/addons for local assets ...
	I1212 01:04:59.625974 1226203 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/files for local assets ...
	I1212 01:04:59.626055 1226203 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> 11173832.pem in /etc/ssl/certs
	I1212 01:04:59.626162 1226203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:04:59.635083 1226203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem --> /etc/ssl/certs/11173832.pem (1708 bytes)
	I1212 01:04:59.657740 1226203 start.go:303] post-start completed in 154.420542ms
	I1212 01:04:59.657837 1226203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:04:59.657881 1226203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-140841
	I1212 01:04:59.675719 1226203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34184 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/missing-upgrade-140841/id_rsa Username:docker}
	I1212 01:04:59.775283 1226203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:04:59.780879 1226203 fix.go:56] fixHost completed within 27.719072974s
	I1212 01:04:59.780906 1226203 start.go:83] releasing machines lock for "missing-upgrade-140841", held for 27.71912944s
	I1212 01:04:59.780978 1226203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-140841
	I1212 01:04:59.799258 1226203 ssh_runner.go:195] Run: cat /version.json
	I1212 01:04:59.799318 1226203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-140841
	I1212 01:04:59.799369 1226203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:04:59.799438 1226203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-140841
	I1212 01:04:59.820505 1226203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34184 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/missing-upgrade-140841/id_rsa Username:docker}
	I1212 01:04:59.829348 1226203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34184 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/missing-upgrade-140841/id_rsa Username:docker}
	W1212 01:04:59.917573 1226203 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1212 01:04:59.917658 1226203 ssh_runner.go:195] Run: systemctl --version
	I1212 01:05:00.064086 1226203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:05:00.191959 1226203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 01:05:00.198929 1226203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:05:00.226119 1226203 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 01:05:00.226325 1226203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:05:00.264753 1226203 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:05:00.264813 1226203 start.go:475] detecting cgroup driver to use...
	I1212 01:05:00.264866 1226203 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 01:05:00.264943 1226203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:05:00.292867 1226203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:05:00.305401 1226203 docker.go:203] disabling cri-docker service (if available) ...
	I1212 01:05:00.305506 1226203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:05:00.318017 1226203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:05:00.331006 1226203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1212 01:05:00.344789 1226203 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1212 01:05:00.344882 1226203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:05:00.452451 1226203 docker.go:219] disabling docker service ...
	I1212 01:05:00.452518 1226203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:05:00.465688 1226203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:05:00.484616 1226203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:05:00.586258 1226203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:05:00.693569 1226203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:05:00.706256 1226203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:05:00.723157 1226203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 01:05:00.723280 1226203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:05:00.735506 1226203 out.go:177] 
	W1212 01:05:00.737482 1226203 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1212 01:05:00.737498 1226203 out.go:239] * 
	* 
	W1212 01:05:00.738490 1226203 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:05:00.741812 1226203 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-140841 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-12-12 01:05:00.805814173 +0000 UTC m=+3244.160800538
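Root cause of this failure: the new minikube binary provisions the profile against the old kicbase v0.0.17 image recorded by v1.17.0, then tries to set the CRI-O pause image by editing /etc/crio/crio.conf.d/02-crio.conf; that drop-in file does not exist in the old image, so the sed exits with status 2 and start aborts with RUNTIME_ENABLE. Below is a minimal Go sketch, not minikube's actual implementation, of a defensive variant that falls back to the legacy single-file config before rewriting pause_image (both paths are assumptions about common CRI-O layouts):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// crioConfigPath returns the first CRI-O config file that exists on this
	// machine. The candidate paths are assumptions about typical layouts, not
	// guarantees for every kicbase image.
	func crioConfigPath() (string, error) {
		candidates := []string{
			"/etc/crio/crio.conf.d/02-crio.conf", // drop-in used by newer images
			"/etc/crio/crio.conf",                // legacy single-file layout
		}
		for _, p := range candidates {
			if _, err := os.Stat(p); err == nil {
				return p, nil
			}
		}
		return "", fmt.Errorf("no CRI-O config found")
	}

	func main() {
		path, err := crioConfigPath()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Same style of in-place edit the log shows, but against whichever
		// config file actually exists.
		cmd := exec.Command("sudo", "sed", "-i",
			`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|`, path)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "update pause_image:", err)
			os.Exit(1)
		}
	}

Run inside the affected container (for example via docker exec), this would edit /etc/crio/crio.conf instead of failing outright; whether that alone is enough for the old image is not verified by this run.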
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-140841
helpers_test.go:235: (dbg) docker inspect missing-upgrade-140841:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "531d84d96c4a384c351aa6b521bc7ee3ce6f8f183f0b6afa1cfe76fee664fc97",
	        "Created": "2023-12-12T01:04:51.507376318Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1228370,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T01:04:51.947245403Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/531d84d96c4a384c351aa6b521bc7ee3ce6f8f183f0b6afa1cfe76fee664fc97/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/531d84d96c4a384c351aa6b521bc7ee3ce6f8f183f0b6afa1cfe76fee664fc97/hostname",
	        "HostsPath": "/var/lib/docker/containers/531d84d96c4a384c351aa6b521bc7ee3ce6f8f183f0b6afa1cfe76fee664fc97/hosts",
	        "LogPath": "/var/lib/docker/containers/531d84d96c4a384c351aa6b521bc7ee3ce6f8f183f0b6afa1cfe76fee664fc97/531d84d96c4a384c351aa6b521bc7ee3ce6f8f183f0b6afa1cfe76fee664fc97-json.log",
	        "Name": "/missing-upgrade-140841",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-140841:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-140841",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d1326de50bef3712e691abf58b76d14d5df83e9b83a64158eca3127fd846c419-init/diff:/var/lib/docker/overlay2/fb4d70b77c8563911ee6b27f4a1db561c619dc8a7965acb41ef3fb4bdff503be/diff:/var/lib/docker/overlay2/86041beeed99538e1b99d40188f9672ba1a7b693ead838775e51b9a3510d7004/diff:/var/lib/docker/overlay2/5fa8514ae4f8b5fabb33f5572ef3fc88b93d58bfa6187659052d80a45eded3af/diff:/var/lib/docker/overlay2/98997cc40b18e4c7f99d86ca89eca96db185976a8229b827c48037b3511b21ef/diff:/var/lib/docker/overlay2/8e633cc4768e89a51bf1ec47b536188f1cd975cbaf3cac0637321afa68d8975f/diff:/var/lib/docker/overlay2/dc22798f3db1a9be08fd20b00b1d80e7bb203ce5ada74ac693db694fa06b2cd6/diff:/var/lib/docker/overlay2/68ac3a76400353310052d55cc6e37bd5540e2d6f10fc56468e116f688f921bfd/diff:/var/lib/docker/overlay2/773851fafd814eb69397b50069db10269c06dc68fda1c986413992826215cd1d/diff:/var/lib/docker/overlay2/c0d63c0a96269ecb94e73f83ce87f7695a6a59e9308ab0baaf1da65aedb051b3/diff:/var/lib/docker/overlay2/845d7b
c9c7dac22cf309ee6c9e992a083190b0c0fcdf8e978c338c8dea10935d/diff:/var/lib/docker/overlay2/0fe9f3fadf7c0afec8d8354cff788c1a00e580021a84e2a1c35518fa89c1989f/diff:/var/lib/docker/overlay2/2bc2871676a297358fbbba96cb75f1f09f37dd50953620bf3a41f857ad0956a0/diff:/var/lib/docker/overlay2/2b11188e983f66cf966ebb4b3e29de020acfdfb78f4f42157835e23e0d32f64a/diff:/var/lib/docker/overlay2/1b6affb2260190abbbc754fb997ae73a40e6d547281c33abe392212766f8182a/diff:/var/lib/docker/overlay2/b37fd8e98065e8cd178447cdd01670dbc5b1b51610175fb056b405d85bf89381/diff:/var/lib/docker/overlay2/dd56c5aeb97575fb22f9777c627e96f411d8a56379da68abad79576b62cc9e7f/diff:/var/lib/docker/overlay2/db10d052d823b9ab3a5aca8eb7443a2c237420e3be080c618924ad8de6a82c1a/diff:/var/lib/docker/overlay2/244a5571dbc6849ccd78f48209e88a3596a55850e61d74aca2560d092d5fd64e/diff:/var/lib/docker/overlay2/9edd140b9c83fa11d5ee9ef02c61078cb869cf812357ba31a69a3b71481225b7/diff:/var/lib/docker/overlay2/8165ad9ab11354f0a63b6943e90be2fd3047cf8aeff7e88fa3bbe792e902d545/diff:/var/lib/d
ocker/overlay2/724e5859245573f0dd2cfd669dd54bdeb2184bee76a2c4a358149700c19c9afe/diff:/var/lib/docker/overlay2/5cbf50571e83a50e4900e8789997c3cf5751b5db2edd146e06b4394344768b01/diff:/var/lib/docker/overlay2/f0e5148a9995af353d9a8218630c98d83a09070ec19c884e493290aa9450f1aa/diff:/var/lib/docker/overlay2/6bb06718328c8876e36d46c77dc9e1fa6c591dec6f731269e496bc386e50082d/diff:/var/lib/docker/overlay2/b89a76e27377efdeaa94f38000311e12bc5b2e7e1c01ce75aa3fdcc1f94b0357/diff:/var/lib/docker/overlay2/33670fd67dff84f14c9f0b05ec0488760e7de910c162dbb841988825840bd5fa/diff:/var/lib/docker/overlay2/d186a153db1787878ee9e8e650f5d5c97ffa053d2eaba48a0a56ececa7e91b21/diff:/var/lib/docker/overlay2/1e473dbb13fa50cad825a9d2e59c969ea3365fd85d98217586d6d9f35b9c9d8c/diff:/var/lib/docker/overlay2/b31e3c4849ff09b980a0a0a7b60ddc82fefd059ee287b9b0ca6fe56ad459cbb9/diff:/var/lib/docker/overlay2/123ad757fb9be19b516a9f0f271570fd2c8b7b3ee3e064246cd946cf672b6df9/diff:/var/lib/docker/overlay2/8827fdaf504bfc123666bb014bb2f5ec5035714dd64b88a7aaf39adef16
d5450/diff:/var/lib/docker/overlay2/b594e39b8597168d620d39b8268c8f3bf1ad7d1c38c7ec7d96ce895b4cbd040c/diff:/var/lib/docker/overlay2/d72354069a4087a4614b497cddaa9f1de7bf3f16a8c9b4b3ff9f257793caa4cd/diff:/var/lib/docker/overlay2/67d8d6204dc15553eb76db847a2d92f3dd32ce028d307e3e03b8ba742c824c6d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d1326de50bef3712e691abf58b76d14d5df83e9b83a64158eca3127fd846c419/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d1326de50bef3712e691abf58b76d14d5df83e9b83a64158eca3127fd846c419/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d1326de50bef3712e691abf58b76d14d5df83e9b83a64158eca3127fd846c419/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-140841",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-140841/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-140841",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-140841",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-140841",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1db1ea122d34dc425513efa1958ac631b2e12c025317cbd79aac5226b6c19fd5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34184"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34183"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34180"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34182"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34181"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1db1ea122d34",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-140841": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "531d84d96c4a",
	                        "missing-upgrade-140841"
	                    ],
	                    "NetworkID": "f6abc73f07b799e4fe332daab826ec0e8b52349306bd5a605bc9acb205d728ea",
	                    "EndpointID": "4b85b574663631b0b0a772685909cdcdfb87db6580e33af2a39a61bdd0752085",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
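The inspect output above confirms the published port map (22/tcp bound to 127.0.0.1:34184) that the provisioning steps earlier queried with docker container inspect -f. A minimal Go sketch, assuming Docker is installed and the container still exists, that reads the same mapped SSH port with the exact --format expression used in the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		name := "missing-upgrade-140841" // container name from this test run
		// Same Go-template expression the cli_runner log lines show.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "inspect failed:", err)
			os.Exit(1)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 34184 in this run
	}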
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-140841 -n missing-upgrade-140841
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-140841 -n missing-upgrade-140841: exit status 6 (330.60118ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 01:05:01.136664 1229371 status.go:415] kubeconfig endpoint: got: 192.168.59.42:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-140841" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
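The status error above reflects a stale kubeconfig rather than a dead host: the endpoint recorded by the old binary (192.168.59.42:8443) no longer matches the recreated container's 192.168.76.2:8443, which is what the suggested `minikube update-context` repairs. A minimal sketch, assuming the default ~/.kube/config path and the k8s.io/client-go module, of reading the endpoint that this comparison is based on; the context name is taken from this run:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative only: the test itself uses the KUBECONFIG path set by
		// the CI job, not the default location assumed here.
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		name := "missing-upgrade-140841"
		if ctx, ok := cfg.Contexts[name]; ok {
			if cluster, ok := cfg.Clusters[ctx.Cluster]; ok {
				// In this run this would print the stale 192.168.59.42:8443
				// endpoint, not the container's actual 192.168.76.2:8443.
				fmt.Println("kubeconfig endpoint:", cluster.Server)
			}
		}
	}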
helpers_test.go:175: Cleaning up "missing-upgrade-140841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-140841
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-140841: (1.853723882s)
--- FAIL: TestMissingContainerUpgrade (178.15s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (98.33s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.2095728274.exe start -p stopped-upgrade-966595 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.2095728274.exe start -p stopped-upgrade-966595 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m11.400607421s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.2095728274.exe -p stopped-upgrade-966595 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.2095728274.exe -p stopped-upgrade-966595 stop: (20.100370381s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-966595 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-966595 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.823101608s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-966595] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-966595 in cluster stopped-upgrade-966595
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-966595" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 01:06:35.879726 1233900 out.go:296] Setting OutFile to fd 1 ...
	I1212 01:06:35.879870 1233900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 01:06:35.879879 1233900 out.go:309] Setting ErrFile to fd 2...
	I1212 01:06:35.879885 1233900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 01:06:35.880123 1233900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	I1212 01:06:35.880471 1233900 out.go:303] Setting JSON to false
	I1212 01:06:35.881431 1233900 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":28142,"bootTime":1702315054,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 01:06:35.881500 1233900 start.go:138] virtualization:  
	I1212 01:06:35.884438 1233900 out.go:177] * [stopped-upgrade-966595] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 01:06:35.886510 1233900 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 01:06:35.888171 1233900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:06:35.886660 1233900 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1212 01:06:35.886697 1233900 notify.go:220] Checking for updates...
	I1212 01:06:35.892266 1233900 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 01:06:35.894210 1233900 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	I1212 01:06:35.896012 1233900 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:06:35.897841 1233900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:06:35.900311 1233900 config.go:182] Loaded profile config "stopped-upgrade-966595": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1212 01:06:35.902805 1233900 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1212 01:06:35.904910 1233900 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 01:06:35.940575 1233900 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 01:06:35.940681 1233900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:06:36.012304 1233900 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1212 01:06:36.051598 1233900 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-12 01:06:36.041893458 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 01:06:36.051709 1233900 docker.go:295] overlay module found
	I1212 01:06:36.054309 1233900 out.go:177] * Using the docker driver based on existing profile
	I1212 01:06:36.056183 1233900 start.go:298] selected driver: docker
	I1212 01:06:36.056201 1233900 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-966595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-966595 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.196 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 01:06:36.056294 1233900 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:06:36.056918 1233900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:06:36.121408 1233900 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-12 01:06:36.112247303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 01:06:36.121774 1233900 cni.go:84] Creating CNI manager for ""
	I1212 01:06:36.121797 1233900 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 01:06:36.121809 1233900 start_flags.go:323] config:
	{Name:stopped-upgrade-966595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-966595 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.196 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 01:06:36.124076 1233900 out.go:177] * Starting control plane node stopped-upgrade-966595 in cluster stopped-upgrade-966595
	I1212 01:06:36.126060 1233900 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 01:06:36.128103 1233900 out.go:177] * Pulling base image ...
	I1212 01:06:36.130303 1233900 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1212 01:06:36.130377 1233900 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1212 01:06:36.148127 1233900 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1212 01:06:36.148152 1233900 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1212 01:06:36.205622 1233900 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1212 01:06:36.205795 1233900 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/stopped-upgrade-966595/config.json ...
	I1212 01:06:36.205916 1233900 cache.go:107] acquiring lock: {Name:mk71819f230f97467ec9647c6a082f5eae8154b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:06:36.206008 1233900 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 01:06:36.206017 1233900 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 106.942µs
	I1212 01:06:36.206027 1233900 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 01:06:36.206044 1233900 cache.go:194] Successfully downloaded all kic artifacts
	I1212 01:06:36.206048 1233900 cache.go:107] acquiring lock: {Name:mk3397fd1d4cc2fb6786d8e96c84feef7675727c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:06:36.206078 1233900 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1212 01:06:36.206083 1233900 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 36.323µs
	I1212 01:06:36.206090 1233900 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1212 01:06:36.206084 1233900 start.go:365] acquiring machines lock for stopped-upgrade-966595: {Name:mka6b942d5b146a048aa21f4f8408db281a6c419 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:06:36.206098 1233900 cache.go:107] acquiring lock: {Name:mk3bbe29f60dcf38772340b89c5e1658f8d68b9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:06:36.206120 1233900 start.go:369] acquired machines lock for "stopped-upgrade-966595" in 22.81µs
	I1212 01:06:36.206125 1233900 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1212 01:06:36.206130 1233900 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 32.901µs
	I1212 01:06:36.206133 1233900 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:06:36.206136 1233900 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1212 01:06:36.206139 1233900 fix.go:54] fixHost starting: 
	I1212 01:06:36.206145 1233900 cache.go:107] acquiring lock: {Name:mkd0a1d420a8f75b003208757daa7be30d358be0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:06:36.206169 1233900 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1212 01:06:36.206174 1233900 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 29.628µs
	I1212 01:06:36.206180 1233900 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1212 01:06:36.206190 1233900 cache.go:107] acquiring lock: {Name:mk9dae40bd3cfb60b0c3817b3768f1c355aec93a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:06:36.206214 1233900 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1212 01:06:36.206220 1233900 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 33.328µs
	I1212 01:06:36.206226 1233900 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1212 01:06:36.206234 1233900 cache.go:107] acquiring lock: {Name:mk304519857cdb668bd40c36d0c6438db21d0fdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:06:36.206257 1233900 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1212 01:06:36.206262 1233900 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 28.733µs
	I1212 01:06:36.206267 1233900 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1212 01:06:36.206275 1233900 cache.go:107] acquiring lock: {Name:mkba6503fa5665e88c24879cad4c829b28e1067b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:06:36.206300 1233900 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1212 01:06:36.206305 1233900 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 30.719µs
	I1212 01:06:36.206311 1233900 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1212 01:06:36.206319 1233900 cache.go:107] acquiring lock: {Name:mk0095d89eada2ba74f81f7fdd75395ecacb654f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:06:36.206346 1233900 cache.go:115] /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1212 01:06:36.206351 1233900 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 32.828µs
	I1212 01:06:36.206357 1233900 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1212 01:06:36.206362 1233900 cache.go:87] Successfully saved all images to host disk.
	I1212 01:06:36.206396 1233900 cli_runner.go:164] Run: docker container inspect stopped-upgrade-966595 --format={{.State.Status}}
	I1212 01:06:36.223762 1233900 fix.go:102] recreateIfNeeded on stopped-upgrade-966595: state=Stopped err=<nil>
	W1212 01:06:36.223800 1233900 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 01:06:36.226124 1233900 out.go:177] * Restarting existing docker container for "stopped-upgrade-966595" ...
	I1212 01:06:36.228268 1233900 cli_runner.go:164] Run: docker start stopped-upgrade-966595
	I1212 01:06:36.560968 1233900 cli_runner.go:164] Run: docker container inspect stopped-upgrade-966595 --format={{.State.Status}}
	I1212 01:06:36.591504 1233900 kic.go:430] container "stopped-upgrade-966595" state is running.
	I1212 01:06:36.592090 1233900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-966595
	I1212 01:06:36.626981 1233900 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/stopped-upgrade-966595/config.json ...
	I1212 01:06:36.627198 1233900 machine.go:88] provisioning docker machine ...
	I1212 01:06:36.627212 1233900 ubuntu.go:169] provisioning hostname "stopped-upgrade-966595"
	I1212 01:06:36.627259 1233900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-966595
	I1212 01:06:36.653782 1233900 main.go:141] libmachine: Using SSH client type: native
	I1212 01:06:36.654201 1233900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34192 <nil> <nil>}
	I1212 01:06:36.654216 1233900 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-966595 && echo "stopped-upgrade-966595" | sudo tee /etc/hostname
	I1212 01:06:36.654991 1233900 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59130->127.0.0.1:34192: read: connection reset by peer
	I1212 01:06:39.818012 1233900 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-966595
	
	I1212 01:06:39.818193 1233900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-966595
	I1212 01:06:39.844408 1233900 main.go:141] libmachine: Using SSH client type: native
	I1212 01:06:39.844813 1233900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34192 <nil> <nil>}
	I1212 01:06:39.844831 1233900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-966595' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-966595/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-966595' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:06:39.990180 1233900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:06:39.990238 1233900 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17764-1111943/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-1111943/.minikube}
	I1212 01:06:39.990275 1233900 ubuntu.go:177] setting up certificates
	I1212 01:06:39.990298 1233900 provision.go:83] configureAuth start
	I1212 01:06:39.990378 1233900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-966595
	I1212 01:06:40.015718 1233900 provision.go:138] copyHostCerts
	I1212 01:06:40.015791 1233900 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem, removing ...
	I1212 01:06:40.015821 1233900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem
	I1212 01:06:40.015904 1233900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/cert.pem (1123 bytes)
	I1212 01:06:40.016015 1233900 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem, removing ...
	I1212 01:06:40.016020 1233900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem
	I1212 01:06:40.016047 1233900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/key.pem (1679 bytes)
	I1212 01:06:40.016110 1233900 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem, removing ...
	I1212 01:06:40.016115 1233900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem
	I1212 01:06:40.016139 1233900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.pem (1082 bytes)
	I1212 01:06:40.016195 1233900 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-966595 san=[192.168.59.196 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-966595]
	I1212 01:06:40.764275 1233900 provision.go:172] copyRemoteCerts
	I1212 01:06:40.765151 1233900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:06:40.765259 1233900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-966595
	I1212 01:06:40.813357 1233900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/stopped-upgrade-966595/id_rsa Username:docker}
	I1212 01:06:40.914406 1233900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:06:40.937353 1233900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 01:06:40.959598 1233900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:06:40.981956 1233900 provision.go:86] duration metric: configureAuth took 991.622898ms
	I1212 01:06:40.982031 1233900 ubuntu.go:193] setting minikube options for container-runtime
	I1212 01:06:40.982222 1233900 config.go:182] Loaded profile config "stopped-upgrade-966595": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1212 01:06:40.982337 1233900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-966595
	I1212 01:06:41.001890 1233900 main.go:141] libmachine: Using SSH client type: native
	I1212 01:06:41.002311 1233900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be750] 0x3c0ec0 <nil>  [] 0s} 127.0.0.1 34192 <nil> <nil>}
	I1212 01:06:41.002331 1233900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:06:41.417633 1233900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:06:41.417656 1233900 machine.go:91] provisioned docker machine in 4.790447763s
	I1212 01:06:41.417666 1233900 start.go:300] post-start starting for "stopped-upgrade-966595" (driver="docker")
	I1212 01:06:41.417677 1233900 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:06:41.417747 1233900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:06:41.417791 1233900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-966595
	I1212 01:06:41.439493 1233900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/stopped-upgrade-966595/id_rsa Username:docker}
	I1212 01:06:41.538166 1233900 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:06:41.541922 1233900 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 01:06:41.541947 1233900 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:06:41.541979 1233900 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 01:06:41.541992 1233900 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1212 01:06:41.542007 1233900 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/addons for local assets ...
	I1212 01:06:41.542069 1233900 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-1111943/.minikube/files for local assets ...
	I1212 01:06:41.542180 1233900 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem -> 11173832.pem in /etc/ssl/certs
	I1212 01:06:41.542292 1233900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:06:41.550966 1233900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/ssl/certs/11173832.pem --> /etc/ssl/certs/11173832.pem (1708 bytes)
	I1212 01:06:41.573141 1233900 start.go:303] post-start completed in 155.458914ms
	I1212 01:06:41.573293 1233900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:06:41.573364 1233900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-966595
	I1212 01:06:41.591435 1233900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/stopped-upgrade-966595/id_rsa Username:docker}
	I1212 01:06:41.691028 1233900 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:06:41.696415 1233900 fix.go:56] fixHost completed within 5.490269127s
	I1212 01:06:41.696443 1233900 start.go:83] releasing machines lock for "stopped-upgrade-966595", held for 5.490313876s
	I1212 01:06:41.696512 1233900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-966595
	I1212 01:06:41.714667 1233900 ssh_runner.go:195] Run: cat /version.json
	I1212 01:06:41.714726 1233900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-966595
	I1212 01:06:41.714740 1233900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:06:41.714802 1233900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-966595
	I1212 01:06:41.735626 1233900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/stopped-upgrade-966595/id_rsa Username:docker}
	I1212 01:06:41.741959 1233900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/stopped-upgrade-966595/id_rsa Username:docker}
	W1212 01:06:41.829383 1233900 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1212 01:06:41.829461 1233900 ssh_runner.go:195] Run: systemctl --version
	I1212 01:06:41.903707 1233900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:06:42.064750 1233900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 01:06:42.070780 1233900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:06:42.095827 1233900 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 01:06:42.095961 1233900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:06:42.130256 1233900 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:06:42.130281 1233900 start.go:475] detecting cgroup driver to use...
	I1212 01:06:42.130314 1233900 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 01:06:42.130369 1233900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:06:42.159485 1233900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:06:42.172753 1233900 docker.go:203] disabling cri-docker service (if available) ...
	I1212 01:06:42.172857 1233900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:06:42.185854 1233900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:06:42.198733 1233900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1212 01:06:42.212579 1233900 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1212 01:06:42.212655 1233900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:06:42.323100 1233900 docker.go:219] disabling docker service ...
	I1212 01:06:42.323175 1233900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:06:42.336473 1233900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:06:42.349543 1233900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:06:42.453485 1233900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:06:42.572782 1233900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:06:42.584755 1233900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:06:42.602326 1233900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 01:06:42.602413 1233900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:06:42.616297 1233900 out.go:177] 
	W1212 01:06:42.618439 1233900 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1212 01:06:42.618465 1233900 out.go:239] * 
	* 
	W1212 01:06:42.619514 1233900 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:06:42.622449 1233900 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-966595 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (98.33s)
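The exit status 90 above traces back to a single step in the stderr: the upgraded binary rewrites pause_image with sed in /etc/crio/crio.conf.d/02-crio.conf, but that drop-in does not exist inside the restarted v0.0.17 kicbase container, so sed exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal shell sketch of that step, plus a guarded variant that creates the drop-in first (the guard is an illustration only, not what minikube actually does):

	# Failing step, copied from the stderr above; exits 2 when the file is missing
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf

	# Hypothetical guard: make sure the drop-in exists before editing it
	sudo mkdir -p /etc/crio/crio.conf.d
	[ -f /etc/crio/crio.conf.d/02-crio.conf ] || printf '[crio.image]\npause_image = ""\n' | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf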

                                                
                                    

Test pass (270/314)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 14.91
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.4/json-events 9.4
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.2/json-events 26.69
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.09
23 TestDownloadOnly/DeleteAll 0.24
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
26 TestBinaryMirror 0.64
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
32 TestAddons/Setup 161.08
34 TestAddons/parallel/Registry 14.58
36 TestAddons/parallel/InspektorGadget 11.14
37 TestAddons/parallel/MetricsServer 5.87
41 TestAddons/parallel/Headlamp 12.13
42 TestAddons/parallel/CloudSpanner 5.62
44 TestAddons/parallel/NvidiaDevicePlugin 5.62
47 TestAddons/serial/GCPAuth/Namespaces 0.16
48 TestAddons/StoppedEnableDisable 12.31
49 TestCertOptions 38.02
50 TestCertExpiration 253.93
52 TestForceSystemdFlag 41.82
53 TestForceSystemdEnv 45.51
59 TestErrorSpam/setup 34.33
60 TestErrorSpam/start 0.87
61 TestErrorSpam/status 1.11
62 TestErrorSpam/pause 1.83
63 TestErrorSpam/unpause 1.98
64 TestErrorSpam/stop 1.47
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 79.29
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 34.89
71 TestFunctional/serial/KubeContext 0.07
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.78
76 TestFunctional/serial/CacheCmd/cache/add_local 1.12
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
78 TestFunctional/serial/CacheCmd/cache/list 0.07
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.1
81 TestFunctional/serial/CacheCmd/cache/delete 0.15
82 TestFunctional/serial/MinikubeKubectlCmd 0.15
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
84 TestFunctional/serial/ExtraConfig 31.14
85 TestFunctional/serial/ComponentHealth 0.12
86 TestFunctional/serial/LogsCmd 1.84
87 TestFunctional/serial/LogsFileCmd 1.9
88 TestFunctional/serial/InvalidService 4.39
90 TestFunctional/parallel/ConfigCmd 0.6
91 TestFunctional/parallel/DashboardCmd 30.34
92 TestFunctional/parallel/DryRun 0.51
93 TestFunctional/parallel/InternationalLanguage 0.22
94 TestFunctional/parallel/StatusCmd 1.13
98 TestFunctional/parallel/ServiceCmdConnect 35.64
99 TestFunctional/parallel/AddonsCmd 0.18
102 TestFunctional/parallel/SSHCmd 0.78
103 TestFunctional/parallel/CpCmd 1.62
105 TestFunctional/parallel/FileSync 0.31
106 TestFunctional/parallel/CertSync 1.94
110 TestFunctional/parallel/NodeLabels 0.09
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
114 TestFunctional/parallel/License 0.34
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
121 TestFunctional/parallel/ServiceCmd/List 0.56
122 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
123 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
124 TestFunctional/parallel/ServiceCmd/Format 0.43
125 TestFunctional/parallel/ServiceCmd/URL 0.42
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
127 TestFunctional/parallel/ProfileCmd/profile_list 0.43
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
129 TestFunctional/parallel/MountCmd/any-port 37.41
131 TestFunctional/parallel/MountCmd/specific-port 2.15
132 TestFunctional/parallel/MountCmd/VerifyCleanup 2.36
133 TestFunctional/parallel/Version/short 0.08
134 TestFunctional/parallel/Version/components 1.16
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
139 TestFunctional/parallel/ImageCommands/ImageBuild 3.24
140 TestFunctional/parallel/ImageCommands/Setup 2.62
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.41
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.03
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.54
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.9
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.27
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.96
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/delete_addon-resizer_images 0.08
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestIngressAddonLegacy/StartLegacyK8sCluster 88.33
164 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.67
168 TestJSONOutput/start/Command 80.13
169 TestJSONOutput/start/Audit 0
171 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/pause/Command 0.83
175 TestJSONOutput/pause/Audit 0
177 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/unpause/Command 0.74
181 TestJSONOutput/unpause/Audit 0
183 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/stop/Command 5.91
187 TestJSONOutput/stop/Audit 0
189 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
191 TestErrorJSONOutput 0.28
193 TestKicCustomNetwork/create_custom_network 48.4
194 TestKicCustomNetwork/use_default_bridge_network 32.89
195 TestKicExistingNetwork 36.08
196 TestKicCustomSubnet 36.34
197 TestKicStaticIP 38.33
198 TestMainNoArgs 0.08
199 TestMinikubeProfile 71.47
202 TestMountStart/serial/StartWithMountFirst 7.08
203 TestMountStart/serial/VerifyMountFirst 0.3
204 TestMountStart/serial/StartWithMountSecond 6.9
205 TestMountStart/serial/VerifyMountSecond 0.3
206 TestMountStart/serial/DeleteFirst 1.67
207 TestMountStart/serial/VerifyMountPostDelete 0.3
208 TestMountStart/serial/Stop 1.22
209 TestMountStart/serial/RestartStopped 7.71
210 TestMountStart/serial/VerifyMountPostStop 0.3
213 TestMultiNode/serial/FreshStart2Nodes 81.14
214 TestMultiNode/serial/DeployApp2Nodes 4.91
216 TestMultiNode/serial/AddNode 50.37
217 TestMultiNode/serial/MultiNodeLabels 0.1
218 TestMultiNode/serial/ProfileList 0.35
219 TestMultiNode/serial/CopyFile 11.21
220 TestMultiNode/serial/StopNode 2.37
221 TestMultiNode/serial/StartAfterStop 13.38
222 TestMultiNode/serial/RestartKeepsNodes 123.63
223 TestMultiNode/serial/DeleteNode 5.11
224 TestMultiNode/serial/StopMultiNode 23.99
225 TestMultiNode/serial/RestartMultiNode 84.69
226 TestMultiNode/serial/ValidateNameConflict 32.82
231 TestPreload 179.34
233 TestScheduledStopUnix 107.77
236 TestInsufficientStorage 11.01
239 TestKubernetesUpgrade 384.9
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
243 TestNoKubernetes/serial/StartWithK8s 39.71
244 TestNoKubernetes/serial/StartWithStopK8s 9.86
245 TestNoKubernetes/serial/Start 9.84
246 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
247 TestNoKubernetes/serial/ProfileList 1.09
248 TestNoKubernetes/serial/Stop 1.27
249 TestNoKubernetes/serial/StartNoArgs 7.71
250 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
251 TestStoppedBinaryUpgrade/Setup 1.3
253 TestStoppedBinaryUpgrade/MinikubeLogs 0.68
262 TestPause/serial/Start 77.22
263 TestPause/serial/SecondStartNoReconfiguration 30.62
264 TestPause/serial/Pause 1.17
265 TestPause/serial/VerifyStatus 0.45
266 TestPause/serial/Unpause 1.07
267 TestPause/serial/PauseAgain 1.36
268 TestPause/serial/DeletePaused 2.95
269 TestPause/serial/VerifyDeletedResources 0.46
277 TestNetworkPlugins/group/false 6.3
282 TestStartStop/group/old-k8s-version/serial/FirstStart 120.39
283 TestStartStop/group/old-k8s-version/serial/DeployApp 9.54
284 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.1
285 TestStartStop/group/old-k8s-version/serial/Stop 12.1
286 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
287 TestStartStop/group/old-k8s-version/serial/SecondStart 432.64
289 TestStartStop/group/no-preload/serial/FirstStart 70.31
290 TestStartStop/group/no-preload/serial/DeployApp 9.99
291 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
292 TestStartStop/group/no-preload/serial/Stop 12.05
293 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
294 TestStartStop/group/no-preload/serial/SecondStart 352.87
295 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
296 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.16
297 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.32
298 TestStartStop/group/old-k8s-version/serial/Pause 5.04
300 TestStartStop/group/embed-certs/serial/FirstStart 86.77
301 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.05
302 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
303 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.33
304 TestStartStop/group/no-preload/serial/Pause 3.75
306 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.68
307 TestStartStop/group/embed-certs/serial/DeployApp 8.57
308 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.96
309 TestStartStop/group/embed-certs/serial/Stop 12.16
310 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
311 TestStartStop/group/embed-certs/serial/SecondStart 346.07
312 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.6
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.24
314 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.06
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 356.2
317 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 15.06
318 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
319 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
320 TestStartStop/group/embed-certs/serial/Pause 4.19
322 TestStartStop/group/newest-cni/serial/FirstStart 57.09
323 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 20.12
324 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
325 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
326 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.04
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.97
329 TestStartStop/group/newest-cni/serial/Stop 1.76
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
331 TestStartStop/group/newest-cni/serial/SecondStart 37.05
332 TestNetworkPlugins/group/auto/Start 82.36
333 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
336 TestStartStop/group/newest-cni/serial/Pause 3.51
337 TestNetworkPlugins/group/kindnet/Start 81.71
338 TestNetworkPlugins/group/auto/KubeletFlags 0.36
339 TestNetworkPlugins/group/auto/NetCatPod 11.43
340 TestNetworkPlugins/group/auto/DNS 0.21
341 TestNetworkPlugins/group/auto/Localhost 0.18
342 TestNetworkPlugins/group/auto/HairPin 0.17
343 TestNetworkPlugins/group/calico/Start 81.01
344 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
345 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
346 TestNetworkPlugins/group/kindnet/NetCatPod 12.38
347 TestNetworkPlugins/group/kindnet/DNS 0.27
348 TestNetworkPlugins/group/kindnet/Localhost 0.26
349 TestNetworkPlugins/group/kindnet/HairPin 0.27
350 TestNetworkPlugins/group/custom-flannel/Start 72.93
351 TestNetworkPlugins/group/calico/ControllerPod 5.06
352 TestNetworkPlugins/group/calico/KubeletFlags 0.45
353 TestNetworkPlugins/group/calico/NetCatPod 12.53
354 TestNetworkPlugins/group/calico/DNS 0.24
355 TestNetworkPlugins/group/calico/Localhost 0.18
356 TestNetworkPlugins/group/calico/HairPin 0.22
357 TestNetworkPlugins/group/enable-default-cni/Start 93.49
358 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.44
359 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.58
360 TestNetworkPlugins/group/custom-flannel/DNS 0.24
361 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
362 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
363 TestNetworkPlugins/group/flannel/Start 69.26
364 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
365 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.43
366 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
367 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
368 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
369 TestNetworkPlugins/group/flannel/ControllerPod 5.04
370 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
371 TestNetworkPlugins/group/flannel/NetCatPod 12.52
372 TestNetworkPlugins/group/flannel/DNS 0.26
373 TestNetworkPlugins/group/bridge/Start 50.29
374 TestNetworkPlugins/group/flannel/Localhost 0.26
375 TestNetworkPlugins/group/flannel/HairPin 0.21
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
377 TestNetworkPlugins/group/bridge/NetCatPod 10.34
378 TestNetworkPlugins/group/bridge/DNS 21.42
379 TestNetworkPlugins/group/bridge/Localhost 0.15
380 TestNetworkPlugins/group/bridge/HairPin 0.17
x
+
TestDownloadOnly/v1.16.0/json-events (14.91s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-661903 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-661903 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.909720379s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.91s)
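The json-events subtest above exercises the machine-readable event stream that start -o=json writes to stdout. A quick way to eyeball that stream by hand is to pipe it through jq; the event type string and the data fields used below (currentstep, message) are assumptions about minikube's cloudevents-style output, not something this report confirms:

	# Hypothetical inspection of the JSON event stream (field names assumed)
	out/minikube-linux-arm64 start -o=json --download-only -p download-only-661903 --force \
	  --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker \
	  | jq -c 'select(.type == "io.k8s.sigs.minikube.step") | {step: .data.currentstep, message: .data.message}'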

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-661903
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-661903: exit status 85 (88.558852ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-661903 | jenkins | v1.32.0 | 12 Dec 23 00:10 UTC |          |
	|         | -p download-only-661903        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:10:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:10:56.766030 1117389 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:10:56.766186 1117389 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:10:56.766195 1117389 out.go:309] Setting ErrFile to fd 2...
	I1212 00:10:56.766201 1117389 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:10:56.766456 1117389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	W1212 00:10:56.766604 1117389 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17764-1111943/.minikube/config/config.json: open /home/jenkins/minikube-integration/17764-1111943/.minikube/config/config.json: no such file or directory
	I1212 00:10:56.767005 1117389 out.go:303] Setting JSON to true
	I1212 00:10:56.767840 1117389 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24803,"bootTime":1702315054,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 00:10:56.767913 1117389 start.go:138] virtualization:  
	I1212 00:10:56.771040 1117389 out.go:97] [download-only-661903] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:10:56.773040 1117389 out.go:169] MINIKUBE_LOCATION=17764
	W1212 00:10:56.771246 1117389 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball: no such file or directory
	I1212 00:10:56.771292 1117389 notify.go:220] Checking for updates...
	I1212 00:10:56.774903 1117389 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:10:56.776551 1117389 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:10:56.778727 1117389 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	I1212 00:10:56.780833 1117389 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1212 00:10:56.785088 1117389 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 00:10:56.785404 1117389 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:10:56.809486 1117389 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:10:56.809589 1117389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:10:56.889496 1117389 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-12-12 00:10:56.879019889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:10:56.889600 1117389 docker.go:295] overlay module found
	I1212 00:10:56.892170 1117389 out.go:97] Using the docker driver based on user configuration
	I1212 00:10:56.892198 1117389 start.go:298] selected driver: docker
	I1212 00:10:56.892205 1117389 start.go:902] validating driver "docker" against <nil>
	I1212 00:10:56.892309 1117389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:10:56.976026 1117389 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-12-12 00:10:56.966324361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:10:56.976195 1117389 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 00:10:56.976486 1117389 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1212 00:10:56.976641 1117389 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 00:10:56.978805 1117389 out.go:169] Using Docker driver with root privileges
	I1212 00:10:56.980618 1117389 cni.go:84] Creating CNI manager for ""
	I1212 00:10:56.980633 1117389 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:10:56.980644 1117389 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:10:56.980655 1117389 start_flags.go:323] config:
	{Name:download-only-661903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-661903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:10:56.982804 1117389 out.go:97] Starting control plane node download-only-661903 in cluster download-only-661903
	I1212 00:10:56.982821 1117389 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 00:10:56.984723 1117389 out.go:97] Pulling base image ...
	I1212 00:10:56.984742 1117389 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 00:10:56.984841 1117389 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:10:57.002756 1117389 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 to local cache
	I1212 00:10:57.002959 1117389 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local cache directory
	I1212 00:10:57.003058 1117389 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 to local cache
	I1212 00:10:57.048110 1117389 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1212 00:10:57.048145 1117389 cache.go:56] Caching tarball of preloaded images
	I1212 00:10:57.048323 1117389 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 00:10:57.050640 1117389 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1212 00:10:57.050660 1117389 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1212 00:10:57.160035 1117389 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1212 00:11:05.530089 1117389 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-661903"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
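The download step above appends an inline "checksum=md5:..." parameter to the preload URL and verifies the tarball once it lands on disk. Purely as an illustration of that verify-after-download step, here is a minimal Go sketch; the file path and expected digest are placeholder values, not ones taken from this run, and the helper name md5OfFile is invented for the example.

// verify_md5.go - minimal sketch: compute a file's MD5 and compare it to an
// expected hex digest, the way a preload tarball could be checked after download.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// md5OfFile streams the file through an MD5 hash and returns the hex digest.
func md5OfFile(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	// Placeholder path and digest; substitute the real tarball and the
	// md5 value carried in the download URL's checksum parameter.
	path := "preloaded-images.tar.lz4"
	expected := "00000000000000000000000000000000"

	got, err := md5OfFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "hash error:", err)
		os.Exit(1)
	}
	if got != expected {
		fmt.Fprintf(os.Stderr, "checksum mismatch: got %s, want %s\n", got, expected)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}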

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (9.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-661903 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-661903 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.402204537s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (9.40s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-661903
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-661903: exit status 85 (86.418436ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-661903 | jenkins | v1.32.0 | 12 Dec 23 00:10 UTC |          |
	|         | -p download-only-661903        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-661903 | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |          |
	|         | -p download-only-661903        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:11:11
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:11:11.762602 1117462 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:11:11.762805 1117462 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:11.762817 1117462 out.go:309] Setting ErrFile to fd 2...
	I1212 00:11:11.762824 1117462 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:11.763113 1117462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	W1212 00:11:11.763293 1117462 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17764-1111943/.minikube/config/config.json: open /home/jenkins/minikube-integration/17764-1111943/.minikube/config/config.json: no such file or directory
	I1212 00:11:11.763575 1117462 out.go:303] Setting JSON to true
	I1212 00:11:11.764464 1117462 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24818,"bootTime":1702315054,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 00:11:11.764534 1117462 start.go:138] virtualization:  
	I1212 00:11:11.767104 1117462 out.go:97] [download-only-661903] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:11:11.771236 1117462 out.go:169] MINIKUBE_LOCATION=17764
	I1212 00:11:11.767457 1117462 notify.go:220] Checking for updates...
	I1212 00:11:11.775482 1117462 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:11:11.777523 1117462 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:11:11.779737 1117462 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	I1212 00:11:11.782375 1117462 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1212 00:11:11.786725 1117462 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 00:11:11.787285 1117462 config.go:182] Loaded profile config "download-only-661903": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1212 00:11:11.787371 1117462 start.go:810] api.Load failed for download-only-661903: filestore "download-only-661903": Docker machine "download-only-661903" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 00:11:11.787489 1117462 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 00:11:11.787517 1117462 start.go:810] api.Load failed for download-only-661903: filestore "download-only-661903": Docker machine "download-only-661903" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 00:11:11.810896 1117462 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:11:11.811019 1117462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:11.891330 1117462 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-12 00:11:11.881961005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:11:11.891434 1117462 docker.go:295] overlay module found
	I1212 00:11:11.893884 1117462 out.go:97] Using the docker driver based on existing profile
	I1212 00:11:11.893910 1117462 start.go:298] selected driver: docker
	I1212 00:11:11.893917 1117462 start.go:902] validating driver "docker" against &{Name:download-only-661903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-661903 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:11:11.894084 1117462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:11.960527 1117462 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-12 00:11:11.950808519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:11:11.961020 1117462 cni.go:84] Creating CNI manager for ""
	I1212 00:11:11.961040 1117462 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:11:11.961054 1117462 start_flags.go:323] config:
	{Name:download-only-661903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-661903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPU
s:}
	I1212 00:11:11.963380 1117462 out.go:97] Starting control plane node download-only-661903 in cluster download-only-661903
	I1212 00:11:11.963400 1117462 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 00:11:11.965416 1117462 out.go:97] Pulling base image ...
	I1212 00:11:11.965439 1117462 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 00:11:11.965556 1117462 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:11:11.982279 1117462 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 to local cache
	I1212 00:11:11.982393 1117462 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local cache directory
	I1212 00:11:11.982416 1117462 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local cache directory, skipping pull
	I1212 00:11:11.982420 1117462 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 exists in cache, skipping pull
	I1212 00:11:11.982431 1117462 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 as a tarball
	I1212 00:11:12.036641 1117462 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1212 00:11:12.036665 1117462 cache.go:56] Caching tarball of preloaded images
	I1212 00:11:12.036832 1117462 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 00:11:12.039540 1117462 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1212 00:11:12.039584 1117462 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I1212 00:11:12.162470 1117462 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-661903"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
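In this second run the kic base image is already present, so the pull is skipped ("Found ... in local cache directory, skipping pull" / "exists in cache, skipping pull"). A rough sketch of that check-then-skip guard is shown below; the cache directory layout and entry name are invented for illustration and are not minikube's real layout.

// cache_check.go - sketch of a "skip the download if it's already cached" guard.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cachedPath reports whether a non-empty cache entry for the given name exists.
func cachedPath(cacheDir, name string) (string, bool) {
	p := filepath.Join(cacheDir, name)
	if info, err := os.Stat(p); err == nil && info.Size() > 0 {
		return p, true
	}
	return p, false
}

func main() {
	// Hypothetical cache location used only for this example.
	cacheDir := filepath.Join(os.Getenv("HOME"), ".minikube-demo", "cache", "kic")
	name := "kicbase.tar"

	if p, ok := cachedPath(cacheDir, name); ok {
		fmt.Println(p, "exists in cache, skipping pull")
		return
	}
	fmt.Println("not cached; a real implementation would download here")
}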

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (26.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-661903 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-661903 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (26.688911824s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (26.69s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-661903
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-661903: exit status 85 (92.569499ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-661903 | jenkins | v1.32.0 | 12 Dec 23 00:10 UTC |          |
	|         | -p download-only-661903           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-661903 | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |          |
	|         | -p download-only-661903           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-661903 | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |          |
	|         | -p download-only-661903           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:11:21
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:11:21.252660 1117534 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:11:21.252895 1117534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:21.252923 1117534 out.go:309] Setting ErrFile to fd 2...
	I1212 00:11:21.252943 1117534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:21.253277 1117534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	W1212 00:11:21.253457 1117534 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17764-1111943/.minikube/config/config.json: open /home/jenkins/minikube-integration/17764-1111943/.minikube/config/config.json: no such file or directory
	I1212 00:11:21.253753 1117534 out.go:303] Setting JSON to true
	I1212 00:11:21.254655 1117534 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24828,"bootTime":1702315054,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 00:11:21.254753 1117534 start.go:138] virtualization:  
	I1212 00:11:21.257758 1117534 out.go:97] [download-only-661903] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:11:21.258056 1117534 notify.go:220] Checking for updates...
	I1212 00:11:21.260908 1117534 out.go:169] MINIKUBE_LOCATION=17764
	I1212 00:11:21.262814 1117534 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:11:21.264590 1117534 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:11:21.266634 1117534 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	I1212 00:11:21.268893 1117534 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1212 00:11:21.272599 1117534 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 00:11:21.273124 1117534 config.go:182] Loaded profile config "download-only-661903": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1212 00:11:21.273214 1117534 start.go:810] api.Load failed for download-only-661903: filestore "download-only-661903": Docker machine "download-only-661903" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 00:11:21.273351 1117534 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 00:11:21.273381 1117534 start.go:810] api.Load failed for download-only-661903: filestore "download-only-661903": Docker machine "download-only-661903" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 00:11:21.295894 1117534 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:11:21.295990 1117534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:21.375587 1117534 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-12 00:11:21.36579391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:11:21.375687 1117534 docker.go:295] overlay module found
	I1212 00:11:21.378066 1117534 out.go:97] Using the docker driver based on existing profile
	I1212 00:11:21.378087 1117534 start.go:298] selected driver: docker
	I1212 00:11:21.378094 1117534 start.go:902] validating driver "docker" against &{Name:download-only-661903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-661903 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:11:21.378258 1117534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:21.452541 1117534 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-12 00:11:21.443292407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:11:21.453069 1117534 cni.go:84] Creating CNI manager for ""
	I1212 00:11:21.453087 1117534 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:11:21.453100 1117534 start_flags.go:323] config:
	{Name:download-only-661903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-661903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0
s GPUs:}
	I1212 00:11:21.455475 1117534 out.go:97] Starting control plane node download-only-661903 in cluster download-only-661903
	I1212 00:11:21.455511 1117534 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 00:11:21.457401 1117534 out.go:97] Pulling base image ...
	I1212 00:11:21.457433 1117534 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 00:11:21.457535 1117534 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local docker daemon
	I1212 00:11:21.473993 1117534 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 to local cache
	I1212 00:11:21.474123 1117534 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local cache directory
	I1212 00:11:21.474141 1117534 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 in local cache directory, skipping pull
	I1212 00:11:21.474145 1117534 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 exists in cache, skipping pull
	I1212 00:11:21.474153 1117534 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 as a tarball
	I1212 00:11:21.530121 1117534 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I1212 00:11:21.530164 1117534 cache.go:56] Caching tarball of preloaded images
	I1212 00:11:21.530329 1117534 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 00:11:21.532643 1117534 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1212 00:11:21.532679 1117534 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I1212 00:11:21.653944 1117534 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:bd957ca630cc13aa5437f453b5022da5 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I1212 00:11:40.013920 1117534 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I1212 00:11:40.014059 1117534 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I1212 00:11:40.891125 1117534 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I1212 00:11:40.891257 1117534 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/download-only-661903/config.json ...
	I1212 00:11:40.891492 1117534 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 00:11:40.891698 1117534 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17764-1111943/.minikube/cache/linux/arm64/v1.29.0-rc.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-661903"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)
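Once the preload is verified, the profile configuration is persisted as JSON (the "Saving config to .../profiles/download-only-661903/config.json" line above). The sketch below shows a minimal, hypothetical version of that save step; the struct fields are a tiny invented subset, not minikube's actual ClusterConfig schema, and the directory name is a placeholder.

// save_profile.go - sketch: marshal a small profile config and write it to
// <root>/profiles/<name>/config.json, mirroring the "Saving config to ..." step.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// profileConfig is an illustrative subset of a cluster profile, not the real schema.
type profileConfig struct {
	Name              string `json:"Name"`
	Driver            string `json:"Driver"`
	ContainerRuntime  string `json:"ContainerRuntime"`
	KubernetesVersion string `json:"KubernetesVersion"`
}

func saveProfile(root string, cfg profileConfig) error {
	dir := filepath.Join(root, "profiles", cfg.Name)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
}

func main() {
	cfg := profileConfig{
		Name:              "download-only-example", // placeholder profile name
		Driver:            "docker",
		ContainerRuntime:  "crio",
		KubernetesVersion: "v1.29.0-rc.2",
	}
	if err := saveProfile(".minikube-demo", cfg); err != nil {
		fmt.Fprintln(os.Stderr, "save failed:", err)
		os.Exit(1)
	}
	fmt.Println("profile config written")
}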

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-661903
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
x
+
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-675945 --alsologtostderr --binary-mirror http://127.0.0.1:41867 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-675945" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-675945
--- PASS: TestBinaryMirror (0.64s)
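TestBinaryMirror points minikube at a local HTTP mirror (--binary-mirror http://127.0.0.1:41867); the test harness provides that server itself. As an illustration of what such a mirror can be, here is a minimal static file server in Go. The directory name and port are arbitrary placeholders, not the ones used by the test.

// mirror.go - sketch of a local binary mirror: a plain HTTP file server that
// exposes pre-downloaded release artifacts over loopback.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve everything under ./mirror-root, e.g. mirror-root/release/v1.28.4/bin/...
	fs := http.FileServer(http.Dir("mirror-root"))
	log.Println("serving binary mirror on http://127.0.0.1:8080")
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", fs))
}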

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-513852
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-513852: exit status 85 (89.548174ms)

                                                
                                                
-- stdout --
	* Profile "addons-513852" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-513852"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-513852
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-513852: exit status 85 (101.053193ms)

                                                
                                                
-- stdout --
	* Profile "addons-513852" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-513852"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

                                                
                                    
x
+
TestAddons/Setup (161.08s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-513852 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-513852 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m41.080737873s)
--- PASS: TestAddons/Setup (161.08s)

                                                
                                    
x
+
TestAddons/parallel/Registry (14.58s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 64.950725ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-nztsx" [d6d72673-3fd0-4b6a-8d6c-7ebec393d5cf] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.018383658s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-v7h4s" [a63d003e-1e86-4e98-8cec-b7ede232f639] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.014509182s
addons_test.go:339: (dbg) Run:  kubectl --context addons-513852 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-513852 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-513852 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.478376737s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-513852 ip
2023/12/12 00:14:44 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-513852 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.58s)
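The registry check above has two parts: an in-cluster wget --spider against registry.kube-system.svc.cluster.local, and a host-side GET to the node IP on the registry port (the "GET http://192.168.49.2:5000" debug line). A hedged sketch of the host-side probe in Go follows; the URL is a placeholder for whatever address the cluster actually exposes.

// registry_probe.go - sketch: probe a registry endpoint over HTTP and treat a
// 200 response as healthy, similar to the GET shown in the log above.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	// Placeholder address; in the test this is the cluster IP on port 5000.
	url := "http://192.168.49.2:5000/"

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, "registry not reachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		fmt.Fprintf(os.Stderr, "unexpected status: %s\n", resp.Status)
		os.Exit(1)
	}
	fmt.Println("registry responded with 200 OK")
}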

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.14s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-tp69l" [a998e6b4-44a3-4391-929c-ac0608c4108d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.016848832s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-513852
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-513852: (6.117925991s)
--- PASS: TestAddons/parallel/InspektorGadget (11.14s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.87s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 15.140481ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-q8k8b" [ea3981e3-770c-404a-aa8d-66a2d769677f] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.017340056s
addons_test.go:414: (dbg) Run:  kubectl --context addons-513852 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-513852 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.87s)

                                                
                                    
TestAddons/parallel/Headlamp (12.13s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-513852 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-513852 --alsologtostderr -v=1: (1.090878721s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-zdxdm" [1b9a978a-7876-4fc7-8767-f61cd25f30f3] Pending
helpers_test.go:344: "headlamp-777fd4b855-zdxdm" [1b9a978a-7876-4fc7-8767-f61cd25f30f3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-zdxdm" [1b9a978a-7876-4fc7-8767-f61cd25f30f3] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.03376503s
--- PASS: TestAddons/parallel/Headlamp (12.13s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-nlpcx" [9f08821a-e48f-4378-a7bc-8fb0e2017554] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.011892971s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-513852
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.62s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ssl96" [97efc1d3-32a2-484f-90ee-d7d726a4211f] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.075461706s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-513852
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.62s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-513852 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-513852 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.31s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-513852
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-513852: (11.993100705s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-513852
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-513852
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-513852
--- PASS: TestAddons/StoppedEnableDisable (12.31s)

                                                
                                    
TestCertOptions (38.02s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-788967 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1212 01:11:20.553477 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 01:11:43.197191 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-788967 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.21250884s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-788967 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-788967 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-788967 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-788967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-788967
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-788967: (2.0758325s)
--- PASS: TestCertOptions (38.02s)

                                                
                                    
TestCertExpiration (253.93s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-056132 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-056132 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.539943921s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-056132 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-056132 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (28.634138772s)
helpers_test.go:175: Cleaning up "cert-expiration-056132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-056132
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-056132: (2.75928081s)
--- PASS: TestCertExpiration (253.93s)

                                                
                                    
TestForceSystemdFlag (41.82s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-687419 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-687419 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.753790482s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-687419 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-687419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-687419
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-687419: (2.656794989s)
--- PASS: TestForceSystemdFlag (41.82s)

                                                
                                    
TestForceSystemdEnv (45.51s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-654383 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-654383 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.877414173s)
helpers_test.go:175: Cleaning up "force-systemd-env-654383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-654383
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-654383: (2.631449733s)
--- PASS: TestForceSystemdEnv (45.51s)

                                                
                                    
TestErrorSpam/setup (34.33s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-965302 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-965302 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-965302 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-965302 --driver=docker  --container-runtime=crio: (34.332488353s)
--- PASS: TestErrorSpam/setup (34.33s)

                                                
                                    
TestErrorSpam/start (0.87s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965302 --log_dir /tmp/nospam-965302 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965302 --log_dir /tmp/nospam-965302 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965302 --log_dir /tmp/nospam-965302 start --dry-run
--- PASS: TestErrorSpam/start (0.87s)

                                                
                                    
TestErrorSpam/status (1.11s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965302 --log_dir /tmp/nospam-965302 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965302 --log_dir /tmp/nospam-965302 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965302 --log_dir /tmp/nospam-965302 status
--- PASS: TestErrorSpam/status (1.11s)

                                                
                                    
TestErrorSpam/pause (1.83s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965302 --log_dir /tmp/nospam-965302 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965302 --log_dir /tmp/nospam-965302 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965302 --log_dir /tmp/nospam-965302 pause
--- PASS: TestErrorSpam/pause (1.83s)

                                                
                                    
TestErrorSpam/unpause (1.98s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965302 --log_dir /tmp/nospam-965302 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965302 --log_dir /tmp/nospam-965302 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965302 --log_dir /tmp/nospam-965302 unpause
--- PASS: TestErrorSpam/unpause (1.98s)

                                                
                                    
TestErrorSpam/stop (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965302 --log_dir /tmp/nospam-965302 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-965302 --log_dir /tmp/nospam-965302 stop: (1.249905306s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965302 --log_dir /tmp/nospam-965302 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965302 --log_dir /tmp/nospam-965302 stop
--- PASS: TestErrorSpam/stop (1.47s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17764-1111943/.minikube/files/etc/test/nested/copy/1117383/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (79.29s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-885247 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-885247 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.294711307s)
--- PASS: TestFunctional/serial/StartWithProxy (79.29s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (34.89s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-885247 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-885247 --alsologtostderr -v=8: (34.88490077s)
functional_test.go:659: soft start took 34.885407726s for "functional-885247" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.89s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-885247 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.78s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-885247 cache add registry.k8s.io/pause:3.1: (1.228611665s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-885247 cache add registry.k8s.io/pause:3.3: (1.32368123s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-885247 cache add registry.k8s.io/pause:latest: (1.223015704s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.78s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-885247 /tmp/TestFunctionalserialCacheCmdcacheadd_local1583882860/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 cache add minikube-local-cache-test:functional-885247
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 cache delete minikube-local-cache-test:functional-885247
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-885247
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885247 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (333.818324ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-885247 cache reload: (1.046934768s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)
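
For reference, the cache-reload sequence exercised here (remove the image from the node, confirm it is gone, run cache reload, confirm it is back) can be scripted the same way the test drives it. The following is a minimal Go sketch under the assumption that a minikube binary is on PATH; the profile name functional-885247 is taken from this log only as an example.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and returns its combined output plus any error.
	func run(name string, args ...string) (string, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		const profile = "functional-885247" // illustrative profile name from this log

		// 1. Remove the cached image from the node's container runtime.
		run("minikube", "-p", profile, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")

		// 2. This inspect is expected to fail: the image is no longer present.
		if _, err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
			fmt.Println("unexpected: image still present after rmi")
		}

		// 3. Reload everything in minikube's local cache back onto the node.
		run("minikube", "-p", profile, "cache", "reload")

		// 4. Now the inspect should succeed again.
		if _, err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("image still missing after cache reload:", err)
		} else {
			fmt.Println("image restored from cache")
		}
	}
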

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 kubectl -- --context functional-885247 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-885247 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.14s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-885247 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-885247 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.143470118s)
functional_test.go:757: restart took 31.143579842s for "functional-885247" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.14s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-885247 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.84s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-885247 logs: (1.841874568s)
--- PASS: TestFunctional/serial/LogsCmd (1.84s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.9s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 logs --file /tmp/TestFunctionalserialLogsFileCmd609812836/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-885247 logs --file /tmp/TestFunctionalserialLogsFileCmd609812836/001/logs.txt: (1.896444153s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.90s)

                                                
                                    
TestFunctional/serial/InvalidService (4.39s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-885247 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-885247
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-885247: exit status 115 (562.846069ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30361 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-885247 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.39s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885247 config get cpus: exit status 14 (111.821972ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885247 config get cpus: exit status 14 (102.35035ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.60s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (30.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-885247 --alsologtostderr -v=1]
2023/12/12 00:33:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-885247 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1141980: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.34s)

                                                
                                    
TestFunctional/parallel/DryRun (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-885247 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-885247 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (219.944553ms)

                                                
                                                
-- stdout --
	* [functional-885247] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:32:56.696560 1141764 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:32:56.696772 1141764 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:32:56.696783 1141764 out.go:309] Setting ErrFile to fd 2...
	I1212 00:32:56.696808 1141764 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:32:56.697177 1141764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	I1212 00:32:56.697754 1141764 out.go:303] Setting JSON to false
	I1212 00:32:56.699043 1141764 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":26123,"bootTime":1702315054,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 00:32:56.699166 1141764 start.go:138] virtualization:  
	I1212 00:32:56.701707 1141764 out.go:177] * [functional-885247] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 00:32:56.703875 1141764 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:32:56.703960 1141764 notify.go:220] Checking for updates...
	I1212 00:32:56.706589 1141764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:32:56.709303 1141764 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:32:56.711377 1141764 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	I1212 00:32:56.713160 1141764 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:32:56.714984 1141764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:32:56.717533 1141764 config.go:182] Loaded profile config "functional-885247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:32:56.718271 1141764 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:32:56.743003 1141764 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:32:56.743110 1141764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:32:56.831298 1141764 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-12 00:32:56.821708736 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:32:56.831404 1141764 docker.go:295] overlay module found
	I1212 00:32:56.833740 1141764 out.go:177] * Using the docker driver based on existing profile
	I1212 00:32:56.835581 1141764 start.go:298] selected driver: docker
	I1212 00:32:56.835597 1141764 start.go:902] validating driver "docker" against &{Name:functional-885247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-885247 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:32:56.835697 1141764 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:32:56.837889 1141764 out.go:177] 
	W1212 00:32:56.840057 1141764 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 00:32:56.841831 1141764 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-885247 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.51s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-885247 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-885247 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (221.935575ms)

                                                
                                                
-- stdout --
	* [functional-885247] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:32:56.475061 1141724 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:32:56.475275 1141724 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:32:56.475301 1141724 out.go:309] Setting ErrFile to fd 2...
	I1212 00:32:56.475323 1141724 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:32:56.475700 1141724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	I1212 00:32:56.476109 1141724 out.go:303] Setting JSON to false
	I1212 00:32:56.477034 1141724 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":26123,"bootTime":1702315054,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 00:32:56.477128 1141724 start.go:138] virtualization:  
	I1212 00:32:56.481111 1141724 out.go:177] * [functional-885247] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1212 00:32:56.484841 1141724 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:32:56.488744 1141724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:32:56.485029 1141724 notify.go:220] Checking for updates...
	I1212 00:32:56.495644 1141724 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 00:32:56.498828 1141724 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	I1212 00:32:56.502330 1141724 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:32:56.505113 1141724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:32:56.508470 1141724 config.go:182] Loaded profile config "functional-885247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:32:56.509034 1141724 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:32:56.533573 1141724 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 00:32:56.533676 1141724 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:32:56.606867 1141724 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-12 00:32:56.59712263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:32:56.606974 1141724 docker.go:295] overlay module found
	I1212 00:32:56.609567 1141724 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1212 00:32:56.611958 1141724 start.go:298] selected driver: docker
	I1212 00:32:56.612011 1141724 start.go:902] validating driver "docker" against &{Name:functional-885247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-885247 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:32:56.612124 1141724 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:32:56.615174 1141724 out.go:177] 
	W1212 00:32:56.617942 1141724 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 00:32:56.620864 1141724 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.13s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (35.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-885247 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-885247 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-97t9h" [9731252e-7d41-4efa-86a2-3560a65f3cd7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-97t9h" [9731252e-7d41-4efa-86a2-3560a65f3cd7] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 35.014709871s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32065
functional_test.go:1674: http://192.168.49.2:32065: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-97t9h

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32065
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (35.64s)
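
The flow above (create a deployment, expose it as a NodePort, resolve the URL, fetch the echoserver body) can be scripted the same way. A minimal Go sketch, assuming `service hello-node-connect --url` prints only the URL, as at functional_test.go:1654 above, and that the NodePort is reachable from the host:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the NodePort URL exactly as the test does, then fetch it.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-885247",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatalf("service --url failed: %v", err)
	}
	url := strings.TrimSpace(string(out))

	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("GET %s: %v", url, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The echoserver response starts with the "Hostname:" line seen above.
	fmt.Printf("GET %s -> %d\n%s\n", url, resp.StatusCode, body)
}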

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh -n functional-885247 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 cp functional-885247:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4099322654/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh -n functional-885247 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.62s)
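
The same round trip (copy a local file into the node, read it back over ssh) outside the harness; a minimal Go sketch using the exact commands and paths shown above:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run invokes the minikube binary from this log and returns its combined output.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Copy the file into the node, then read it back over ssh (-n selects the node).
	run("-p", "functional-885247", "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Print(run("-p", "functional-885247", "ssh", "-n", "functional-885247",
		"sudo cat /home/docker/cp-test.txt"))
}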

                                                
                                    
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1117383/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "sudo cat /etc/test/nested/copy/1117383/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
TestFunctional/parallel/CertSync (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1117383.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "sudo cat /etc/ssl/certs/1117383.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1117383.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "sudo cat /usr/share/ca-certificates/1117383.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/11173832.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "sudo cat /etc/ssl/certs/11173832.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/11173832.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "sudo cat /usr/share/ca-certificates/11173832.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.94s)
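
CertSync verifies that each synced certificate is visible inside the node under three names: the .pem in /etc/ssl/certs, the copy in /usr/share/ca-certificates, and the hash-named entry (51391683.0 / 3ec20f2e.0 above, which look like OpenSSL subject-hash names). A minimal Go sketch that walks the same paths from this log over `minikube ssh`:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/1117383.pem",
		"/usr/share/ca-certificates/1117383.pem",
		"/etc/ssl/certs/51391683.0",
		"/etc/ssl/certs/11173832.pem",
		"/usr/share/ca-certificates/11173832.pem",
		"/etc/ssl/certs/3ec20f2e.0",
	}
	for _, p := range paths {
		// A zero exit status from "sudo cat" means the file exists in the VM.
		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-885247",
			"ssh", "sudo cat "+p).Run()
		fmt.Printf("%-45s present=%v\n", p, err == nil)
	}
}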

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-885247 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885247 ssh "sudo systemctl is-active docker": exit status 1 (304.762485ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885247 ssh "sudo systemctl is-active containerd": exit status 1 (337.174762ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)
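
The non-zero exits above are expected: `systemctl is-active` prints "inactive" and exits with status 3 when a unit is not running, and with cri-o as the active runtime both docker and containerd should be stopped. The minikube ssh wrapper then reports its own exit status 1 and puts the inner status ("Process exited with status 3") in stderr, which is exactly what the log shows. A minimal Go sketch running the same probes:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		// Output() captures only stdout ("inactive"/"active"); the wrapper's
		// non-zero exit code for inactive units surfaces as an ExitError.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-885247",
			"ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		code := 0
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode() // exit code of the minikube ssh wrapper itself
		}
		fmt.Printf("%-10s state=%q wrapper-exit=%d\n", unit, state, code)
	}
}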

                                                
                                    
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-885247 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-885247 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-885247 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1138560: os: process already finished
helpers_test.go:502: unable to terminate pid 1138437: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-885247 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-885247 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-885247 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-885247 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-skql8" [5a3a66c5-2cb1-4659-9b49-9798499b8bdd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-skql8" [5a3a66c5-2cb1-4659-9b49-9798499b8bdd] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.014475691s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)
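
The readiness wait that the harness implements by polling pods (helpers_test.go:344 above) can also be expressed with kubectl wait. A minimal Go sketch, assuming the hello-node deployment created by this test:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Block until the pods behind the hello-node deployment are Ready,
	// roughly what the 10m pod poll above verifies.
	cmd := exec.Command("kubectl", "--context", "functional-885247",
		"wait", "--for=condition=ready", "pod",
		"-l", "app=hello-node", "--timeout=10m")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("pods never became ready: %v", err)
	}
}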

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 service list -o json
functional_test.go:1493: Took "550.234544ms" to run "out/minikube-linux-arm64 -p functional-885247 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30951
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30951
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "351.992275ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "75.518333ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "356.493401ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "85.730052ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
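
`profile list -o json` is the machine-readable variant exercised above. A minimal Go sketch that decodes it generically, assuming only that the top level is a JSON object, and prints the top-level keys and their sizes:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatalf("profile list -o json: %v", err)
	}
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		log.Fatalf("not a JSON object: %v", err)
	}
	for k, v := range doc {
		fmt.Printf("%s: %d bytes\n", k, len(v))
	}
}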

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (37.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-885247 /tmp/TestFunctionalparallelMountCmdany-port3278134598/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702341133355741109" to /tmp/TestFunctionalparallelMountCmdany-port3278134598/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702341133355741109" to /tmp/TestFunctionalparallelMountCmdany-port3278134598/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702341133355741109" to /tmp/TestFunctionalparallelMountCmdany-port3278134598/001/test-1702341133355741109
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885247 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (382.106535ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh -- ls -la /mount-9p
E1212 00:32:14.848589 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 00:32 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 00:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 00:32 test-1702341133355741109
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh cat /mount-9p/test-1702341133355741109
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-885247 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4288e98e-355c-41b8-96aa-c1ed6d903564] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4288e98e-355c-41b8-96aa-c1ed6d903564] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4288e98e-355c-41b8-96aa-c1ed6d903564] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 34.020774626s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-885247 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-885247 /tmp/TestFunctionalparallelMountCmdany-port3278134598/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (37.41s)
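
The first findmnt probe above fails because the 9p mount is not yet up when the mount daemon has just started; the harness simply retries. A minimal Go sketch of the same pattern, polling findmnt over `minikube ssh` until the mount appears (mount point and profile from this log; the 30s deadline is arbitrary):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-885247",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted: %s", out)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mount never appeared: %v\n%s", err, out)
		}
		time.Sleep(time.Second) // not mounted yet, retry
	}
}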

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-885247 /tmp/TestFunctionalparallelMountCmdspecific-port1150463801/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885247 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (374.3004ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-885247 /tmp/TestFunctionalparallelMountCmdspecific-port1150463801/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885247 ssh "sudo umount -f /mount-9p": exit status 1 (307.252306ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-885247 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-885247 /tmp/TestFunctionalparallelMountCmdspecific-port1150463801/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.15s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-885247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2068335427/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-885247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2068335427/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-885247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2068335427/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885247 ssh "findmnt -T" /mount1: exit status 1 (731.351659ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-885247 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-885247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2068335427/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-885247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2068335427/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-885247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2068335427/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.36s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-885247 version -o=json --components: (1.159895209s)
--- PASS: TestFunctional/parallel/Version/components (1.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-885247 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-885247
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-885247 image ls --format short --alsologtostderr:
I1212 00:33:51.424165 1143404 out.go:296] Setting OutFile to fd 1 ...
I1212 00:33:51.424323 1143404 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:33:51.424333 1143404 out.go:309] Setting ErrFile to fd 2...
I1212 00:33:51.424339 1143404 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:33:51.424597 1143404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
I1212 00:33:51.425357 1143404 config.go:182] Loaded profile config "functional-885247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 00:33:51.425497 1143404 config.go:182] Loaded profile config "functional-885247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 00:33:51.426215 1143404 cli_runner.go:164] Run: docker container inspect functional-885247 --format={{.State.Status}}
I1212 00:33:51.444813 1143404 ssh_runner.go:195] Run: systemctl --version
I1212 00:33:51.444875 1143404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885247
I1212 00:33:51.463520 1143404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34020 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/functional-885247/id_rsa Username:docker}
I1212 00:33:51.558997 1143404 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-885247 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer  | functional-885247  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/my-image                      | functional-885247  | 6b354c7349a73 | 1.64MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-885247 image ls --format table --alsologtostderr:
I1212 00:33:55.445334 1143735 out.go:296] Setting OutFile to fd 1 ...
I1212 00:33:55.445506 1143735 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:33:55.445531 1143735 out.go:309] Setting ErrFile to fd 2...
I1212 00:33:55.445551 1143735 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:33:55.445819 1143735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
I1212 00:33:55.446479 1143735 config.go:182] Loaded profile config "functional-885247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 00:33:55.446661 1143735 config.go:182] Loaded profile config "functional-885247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 00:33:55.447316 1143735 cli_runner.go:164] Run: docker container inspect functional-885247 --format={{.State.Status}}
I1212 00:33:55.467356 1143735 ssh_runner.go:195] Run: systemctl --version
I1212 00:33:55.467409 1143735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885247
I1212 00:33:55.490751 1143735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34020 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/functional-885247/id_rsa Username:docker}
I1212 00:33:55.591098 1143735 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-885247 image ls --format json --alsologtostderr:
[{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0
fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"f6aaf846a675978b2641fd40477afa2be104dd3db0e4d149ab532eb8df9fefb0","repoDigests":["docker.io/library/435b1ad9ecd485b58bd04c2dfbefb64ef28681c4a88304b46688d80606533f64-tmp@sha256:da1c028c748b3f2163e6d9666994dbe6f53a60f78497750602d127cdb9144f52"],"repoTags":[],"size":"1637644"},{"id":"6b354c7349a732a222d5d752915f314518dd9eb3650caaf23caa63bf7fa75a68","repoDigests":["localhost/my-image@sha256:51a37f2fd69cf14a59ebbba6c33401bf45aae712af385d95d820159321ecca6a"],"re
poTags":["localhost/my-image:functional-885247"],"size":"1640226"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"],"repoTags":["registr
y.k8s.io/kube-scheduler:v1.28.4"],"size":"59253556"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-885247"],"size":"34114467"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io
/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"9cdd6470f48c8b1275
30b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb","registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"121119694"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"],"repoTags":["reg
istry.k8s.io/kube-controller-manager:v1.28.4"],"size":"117252916"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"69992343"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-885247 image ls --format json --alsologtostderr:
I1212 00:33:55.178780 1143702 out.go:296] Setting OutFile to fd 1 ...
I1212 00:33:55.178981 1143702 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:33:55.179010 1143702 out.go:309] Setting ErrFile to fd 2...
I1212 00:33:55.179030 1143702 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:33:55.179334 1143702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
I1212 00:33:55.180035 1143702 config.go:182] Loaded profile config "functional-885247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 00:33:55.180211 1143702 config.go:182] Loaded profile config "functional-885247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 00:33:55.180834 1143702 cli_runner.go:164] Run: docker container inspect functional-885247 --format={{.State.Status}}
I1212 00:33:55.202311 1143702 ssh_runner.go:195] Run: systemctl --version
I1212 00:33:55.202365 1143702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885247
I1212 00:33:55.223135 1143702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34020 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/functional-885247/id_rsa Username:docker}
I1212 00:33:55.323575 1143702 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
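
The JSON listing above is an array of objects with id, repoDigests, repoTags and size fields (size is a quoted string). A minimal Go sketch that decodes that output and looks for one of the tags shown in the listing:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-885247",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls: %v", err)
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, img := range imgs {
		for _, tag := range img.RepoTags {
			if tag == "registry.k8s.io/pause:3.9" {
				fmt.Printf("found %s (%s, %s bytes)\n", tag, img.ID[:12], img.Size)
			}
		}
	}
}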

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-885247 image ls --format yaml --alsologtostderr:
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-885247
size: "34114467"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-885247 image ls --format yaml --alsologtostderr:
I1212 00:33:51.684221 1143432 out.go:296] Setting OutFile to fd 1 ...
I1212 00:33:51.684425 1143432 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:33:51.684437 1143432 out.go:309] Setting ErrFile to fd 2...
I1212 00:33:51.684443 1143432 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:33:51.684808 1143432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
I1212 00:33:51.686829 1143432 config.go:182] Loaded profile config "functional-885247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 00:33:51.687055 1143432 config.go:182] Loaded profile config "functional-885247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 00:33:51.687795 1143432 cli_runner.go:164] Run: docker container inspect functional-885247 --format={{.State.Status}}
I1212 00:33:51.706838 1143432 ssh_runner.go:195] Run: systemctl --version
I1212 00:33:51.706900 1143432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885247
I1212 00:33:51.726986 1143432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34020 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/functional-885247/id_rsa Username:docker}
I1212 00:33:51.823003 1143432 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-885247 ssh pgrep buildkitd: exit status 1 (298.610035ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image build -t localhost/my-image:functional-885247 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-885247 image build -t localhost/my-image:functional-885247 testdata/build --alsologtostderr: (2.673949539s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-885247 image build -t localhost/my-image:functional-885247 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f6aaf846a67
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-885247
--> 6b354c7349a
Successfully tagged localhost/my-image:functional-885247
6b354c7349a732a222d5d752915f314518dd9eb3650caaf23caa63bf7fa75a68
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-885247 image build -t localhost/my-image:functional-885247 testdata/build --alsologtostderr:
I1212 00:33:52.234932 1143506 out.go:296] Setting OutFile to fd 1 ...
I1212 00:33:52.235531 1143506 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:33:52.235544 1143506 out.go:309] Setting ErrFile to fd 2...
I1212 00:33:52.235550 1143506 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:33:52.235832 1143506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
I1212 00:33:52.236526 1143506 config.go:182] Loaded profile config "functional-885247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 00:33:52.237184 1143506 config.go:182] Loaded profile config "functional-885247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 00:33:52.237735 1143506 cli_runner.go:164] Run: docker container inspect functional-885247 --format={{.State.Status}}
I1212 00:33:52.256678 1143506 ssh_runner.go:195] Run: systemctl --version
I1212 00:33:52.256729 1143506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-885247
I1212 00:33:52.276550 1143506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34020 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/functional-885247/id_rsa Username:docker}
I1212 00:33:52.371061 1143506 build_images.go:151] Building image from path: /tmp/build.1815597662.tar
I1212 00:33:52.371157 1143506 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 00:33:52.381638 1143506 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1815597662.tar
I1212 00:33:52.385956 1143506 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1815597662.tar: stat -c "%s %y" /var/lib/minikube/build/build.1815597662.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1815597662.tar': No such file or directory
I1212 00:33:52.385985 1143506 ssh_runner.go:362] scp /tmp/build.1815597662.tar --> /var/lib/minikube/build/build.1815597662.tar (3072 bytes)
I1212 00:33:52.414332 1143506 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1815597662
I1212 00:33:52.425428 1143506 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1815597662 -xf /var/lib/minikube/build/build.1815597662.tar
I1212 00:33:52.438978 1143506 crio.go:297] Building image: /var/lib/minikube/build/build.1815597662
I1212 00:33:52.439050 1143506 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-885247 /var/lib/minikube/build/build.1815597662 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1212 00:33:54.814759 1143506 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-885247 /var/lib/minikube/build/build.1815597662 --cgroup-manager=cgroupfs: (2.375671284s)
I1212 00:33:54.814850 1143506 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1815597662
I1212 00:33:54.825669 1143506 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1815597662.tar
I1212 00:33:54.836267 1143506 build_images.go:207] Built localhost/my-image:functional-885247 from /tmp/build.1815597662.tar
I1212 00:33:54.836303 1143506 build_images.go:123] succeeded building to: functional-885247
I1212 00:33:54.836308 1143506 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.24s)
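
The build log above shows how minikube stages the build context as a tarball under /var/lib/minikube/build and then delegates to podman inside the node, since the cluster runs the CRI-O runtime. A minimal sketch of reproducing the same three-step build by hand; the scratch directory and content.txt below are illustrative, not the actual testdata/build contents:

	mkdir -p /tmp/imagebuild-demo && cd /tmp/imagebuild-demo
	printf 'hello\n' > content.txt
	printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
	# Build inside the cluster's runtime, then confirm the tag is visible there.
	out/minikube-linux-arm64 -p functional-885247 image build -t localhost/my-image:functional-885247 .
	out/minikube-linux-arm64 -p functional-885247 image ls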

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.599488821s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-885247
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image load --daemon gcr.io/google-containers/addon-resizer:functional-885247 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-885247 image load --daemon gcr.io/google-containers/addon-resizer:functional-885247 --alsologtostderr: (4.162113315s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image load --daemon gcr.io/google-containers/addon-resizer:functional-885247 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-885247 image load --daemon gcr.io/google-containers/addon-resizer:functional-885247 --alsologtostderr: (2.792603678s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.726451905s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-885247
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image load --daemon gcr.io/google-containers/addon-resizer:functional-885247 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-885247 image load --daemon gcr.io/google-containers/addon-resizer:functional-885247 --alsologtostderr: (3.532718117s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image save gcr.io/google-containers/addon-resizer:functional-885247 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image rm gcr.io/google-containers/addon-resizer:functional-885247 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-885247 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.009463316s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-885247
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 image save --daemon gcr.io/google-containers/addon-resizer:functional-885247 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-885247
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)
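
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a full round trip between the cluster's CRI-O storage and the host Docker daemon. A condensed sketch of that cycle using the same commands the tests run; the tarball path is illustrative:

	# Save the cluster-side image to a tarball, remove it, then load it back.
	out/minikube-linux-arm64 -p functional-885247 image save gcr.io/google-containers/addon-resizer:functional-885247 /tmp/addon-resizer.tar
	out/minikube-linux-arm64 -p functional-885247 image rm gcr.io/google-containers/addon-resizer:functional-885247
	out/minikube-linux-arm64 -p functional-885247 image load /tmp/addon-resizer.tar
	# Export the image back to the host's Docker daemon and confirm it arrived.
	docker rmi gcr.io/google-containers/addon-resizer:functional-885247
	out/minikube-linux-arm64 -p functional-885247 image save --daemon gcr.io/google-containers/addon-resizer:functional-885247
	docker image inspect gcr.io/google-containers/addon-resizer:functional-885247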

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-885247 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-885247 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-885247
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-885247
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-885247
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (88.33s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-996779 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1212 00:34:31.003383 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 00:34:58.688811 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-996779 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m28.330630784s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (88.33s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-996779 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

                                                
                                    
TestJSONOutput/start/Command (80.13s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-173275 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1212 00:43:45.190373 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:44:31.004070 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-173275 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m20.130433645s)
--- PASS: TestJSONOutput/start/Command (80.13s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.83s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-173275 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.83s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-173275 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.74s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.91s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-173275 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-173275 --output=json --user=testUser: (5.913596122s)
--- PASS: TestJSONOutput/stop/Command (5.91s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.28s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-988595 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-988595 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (100.862458ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e76432c7-de2c-48af-90e9-db2ebde70000","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-988595] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"00e2eec7-8111-484d-8e6f-f87fd95329a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17764"}}
	{"specversion":"1.0","id":"d0eab706-995e-4b81-aec6-a8ba5ecf8480","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"35b9dae2-9e71-436f-8525-3672432e6b09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig"}}
	{"specversion":"1.0","id":"643f9e25-593b-49b5-9f1b-59a33080f052","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube"}}
	{"specversion":"1.0","id":"6aee5506-5959-40ea-88cb-7f0d4fcf1df3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e365a0c7-424d-4d26-8430-2dd074307011","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4962c7e7-1613-4cbe-829a-bc620f794bc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-988595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-988595
--- PASS: TestErrorJSONOutput (0.28s)
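
The stdout above is a stream of CloudEvents-style JSON objects, one per line, with the failure reported as an io.k8s.sigs.minikube.error event. A minimal sketch of pulling that error out of the stream; jq is an assumption here, not something the test itself uses:

	out/minikube-linux-arm64 start -p json-output-error-988595 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# -> DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64
	out/minikube-linux-arm64 delete -p json-output-error-988595   # clean up the aborted profile, as the test does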

                                                
                                    
TestKicCustomNetwork/create_custom_network (48.4s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-374056 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-374056 --network=: (46.317357998s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-374056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-374056
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-374056: (2.060870614s)
--- PASS: TestKicCustomNetwork/create_custom_network (48.40s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (32.89s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-796192 --network=bridge
E1212 00:45:54.049018 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-796192 --network=bridge: (30.846944489s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-796192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-796192
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-796192: (2.016314099s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.89s)

                                                
                                    
TestKicExistingNetwork (36.08s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-545012 --network=existing-network
E1212 00:46:43.197045 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
E1212 00:46:43.202325 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
E1212 00:46:43.212507 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
E1212 00:46:43.232813 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
E1212 00:46:43.273054 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
E1212 00:46:43.353393 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
E1212 00:46:43.513749 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
E1212 00:46:43.834338 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
E1212 00:46:44.475066 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
E1212 00:46:45.755281 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-545012 --network=existing-network: (33.88152827s)
helpers_test.go:175: Cleaning up "existing-network-545012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-545012
E1212 00:46:48.316428 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-545012: (2.032277173s)
--- PASS: TestKicExistingNetwork (36.08s)
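
TestKicExistingNetwork starts a profile against a Docker network that already exists instead of letting minikube create one. The log only shows the network being listed and then reused, so the pre-creation step below is an assumption about how to reproduce the setup from the CLI:

	docker network create existing-network
	out/minikube-linux-arm64 start -p existing-network-545012 --network=existing-network
	docker network ls --format '{{.Name}}'   # the pre-created network should still be listed
	out/minikube-linux-arm64 delete -p existing-network-545012
	docker network rm existing-network        # remove the manually created network afterwards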

                                                
                                    
TestKicCustomSubnet (36.34s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-466757 --subnet=192.168.60.0/24
E1212 00:46:53.436687 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
E1212 00:47:03.677754 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-466757 --subnet=192.168.60.0/24: (34.255498221s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-466757 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-466757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-466757
E1212 00:47:24.158533 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-466757: (2.050361594s)
--- PASS: TestKicCustomSubnet (36.34s)

                                                
                                    
TestKicStaticIP (38.33s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-796796 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-796796 --static-ip=192.168.200.200: (36.086602282s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-796796 ip
helpers_test.go:175: Cleaning up "static-ip-796796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-796796
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-796796: (2.068694887s)
--- PASS: TestKicStaticIP (38.33s)

                                                
                                    
TestMainNoArgs (0.08s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (71.47s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-518756 --driver=docker  --container-runtime=crio
E1212 00:48:05.119746 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
E1212 00:48:17.506863 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-518756 --driver=docker  --container-runtime=crio: (33.536493281s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-521734 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-521734 --driver=docker  --container-runtime=crio: (32.568316346s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-518756
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-521734
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-521734" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-521734
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-521734: (2.019546944s)
helpers_test.go:175: Cleaning up "first-518756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-518756
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-518756: (2.003412805s)
--- PASS: TestMinikubeProfile (71.47s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.08s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-676236 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-676236 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.077293881s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.08s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-676236 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.9s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-678156 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1212 00:49:27.039975 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-678156 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.901521972s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.90s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-678156 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-676236 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-676236 --alsologtostderr -v=5: (1.664837537s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-678156 ssh -- ls /minikube-host
E1212 00:49:31.003872 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-678156
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-678156: (1.219884455s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.71s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-678156
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-678156: (6.71408085s)
--- PASS: TestMountStart/serial/RestartStopped (7.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-678156 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (81.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-270339 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-270339 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m20.587671782s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (81.14s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270339 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270339 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-270339 -- rollout status deployment/busybox: (2.84608917s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270339 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270339 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270339 -- exec busybox-5bc68d56bd-f7wq7 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270339 -- exec busybox-5bc68d56bd-tqh9c -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270339 -- exec busybox-5bc68d56bd-f7wq7 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270339 -- exec busybox-5bc68d56bd-tqh9c -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270339 -- exec busybox-5bc68d56bd-f7wq7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-270339 -- exec busybox-5bc68d56bd-tqh9c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.91s)
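
The deployment check above uses two jsonpath queries to enumerate the busybox pods before running DNS lookups inside each of them. A condensed sketch of the same verification, using plain kubectl against the cluster context (as the MultiNodeLabels test does); pod names differ per run:

	kubectl --context multinode-270339 apply -f testdata/multinodes/multinode-pod-dns-test.yaml
	kubectl --context multinode-270339 rollout status deployment/busybox
	kubectl --context multinode-270339 get pods -o 'jsonpath={.items[*].status.podIP}'
	for pod in $(kubectl --context multinode-270339 get pods -o 'jsonpath={.items[*].metadata.name}'); do
	  kubectl --context multinode-270339 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done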

                                                
                                    
TestMultiNode/serial/AddNode (50.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-270339 -v 3 --alsologtostderr
E1212 00:51:43.197326 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-270339 -v 3 --alsologtostderr: (49.63061123s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.37s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-270339 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 cp testdata/cp-test.txt multinode-270339:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 cp multinode-270339:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4195136942/001/cp-test_multinode-270339.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 cp multinode-270339:/home/docker/cp-test.txt multinode-270339-m02:/home/docker/cp-test_multinode-270339_multinode-270339-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339-m02 "sudo cat /home/docker/cp-test_multinode-270339_multinode-270339-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 cp multinode-270339:/home/docker/cp-test.txt multinode-270339-m03:/home/docker/cp-test_multinode-270339_multinode-270339-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339-m03 "sudo cat /home/docker/cp-test_multinode-270339_multinode-270339-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 cp testdata/cp-test.txt multinode-270339-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 cp multinode-270339-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4195136942/001/cp-test_multinode-270339-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 cp multinode-270339-m02:/home/docker/cp-test.txt multinode-270339:/home/docker/cp-test_multinode-270339-m02_multinode-270339.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339 "sudo cat /home/docker/cp-test_multinode-270339-m02_multinode-270339.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 cp multinode-270339-m02:/home/docker/cp-test.txt multinode-270339-m03:/home/docker/cp-test_multinode-270339-m02_multinode-270339-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339-m03 "sudo cat /home/docker/cp-test_multinode-270339-m02_multinode-270339-m03.txt"
E1212 00:52:10.880173 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 cp testdata/cp-test.txt multinode-270339-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 cp multinode-270339-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4195136942/001/cp-test_multinode-270339-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 cp multinode-270339-m03:/home/docker/cp-test.txt multinode-270339:/home/docker/cp-test_multinode-270339-m03_multinode-270339.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339 "sudo cat /home/docker/cp-test_multinode-270339-m03_multinode-270339.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 cp multinode-270339-m03:/home/docker/cp-test.txt multinode-270339-m02:/home/docker/cp-test_multinode-270339-m03_multinode-270339-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339-m02 "sudo cat /home/docker/cp-test_multinode-270339-m03_multinode-270339-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.21s)
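
The CopyFile block exercises minikube cp in all three directions: host to node, node to host, and node to node, verifying each copy with ssh -n against the target node. A minimal sketch of the pattern with illustrative paths:

	# host -> node
	out/minikube-linux-arm64 -p multinode-270339 cp testdata/cp-test.txt multinode-270339:/home/docker/cp-test.txt
	# node -> host
	out/minikube-linux-arm64 -p multinode-270339 cp multinode-270339:/home/docker/cp-test.txt /tmp/cp-test.txt
	# node -> node
	out/minikube-linux-arm64 -p multinode-270339 cp multinode-270339:/home/docker/cp-test.txt multinode-270339-m02:/home/docker/cp-test_from-m01.txt
	# verify on the target node
	out/minikube-linux-arm64 -p multinode-270339 ssh -n multinode-270339-m02 "sudo cat /home/docker/cp-test_from-m01.txt"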

                                                
                                    
TestMultiNode/serial/StopNode (2.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-270339 node stop m03: (1.225011518s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-270339 status: exit status 7 (578.74418ms)

                                                
                                                
-- stdout --
	multinode-270339
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-270339-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-270339-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-270339 status --alsologtostderr: exit status 7 (569.588627ms)

                                                
                                                
-- stdout --
	multinode-270339
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-270339-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-270339-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:52:16.228282 1188963 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:52:16.228516 1188963 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:52:16.228542 1188963 out.go:309] Setting ErrFile to fd 2...
	I1212 00:52:16.228569 1188963 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:52:16.228930 1188963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	I1212 00:52:16.229169 1188963 out.go:303] Setting JSON to false
	I1212 00:52:16.229218 1188963 mustload.go:65] Loading cluster: multinode-270339
	I1212 00:52:16.230142 1188963 notify.go:220] Checking for updates...
	I1212 00:52:16.230368 1188963 config.go:182] Loaded profile config "multinode-270339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:52:16.230402 1188963 status.go:255] checking status of multinode-270339 ...
	I1212 00:52:16.231342 1188963 cli_runner.go:164] Run: docker container inspect multinode-270339 --format={{.State.Status}}
	I1212 00:52:16.254757 1188963 status.go:330] multinode-270339 host status = "Running" (err=<nil>)
	I1212 00:52:16.254806 1188963 host.go:66] Checking if "multinode-270339" exists ...
	I1212 00:52:16.255093 1188963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-270339
	I1212 00:52:16.272354 1188963 host.go:66] Checking if "multinode-270339" exists ...
	I1212 00:52:16.272678 1188963 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:52:16.272728 1188963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339
	I1212 00:52:16.302932 1188963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34085 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339/id_rsa Username:docker}
	I1212 00:52:16.400264 1188963 ssh_runner.go:195] Run: systemctl --version
	I1212 00:52:16.405813 1188963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:52:16.419546 1188963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:52:16.492424 1188963 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-12-12 00:52:16.482120124 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 00:52:16.493011 1188963 kubeconfig.go:92] found "multinode-270339" server: "https://192.168.58.2:8443"
	I1212 00:52:16.493032 1188963 api_server.go:166] Checking apiserver status ...
	I1212 00:52:16.493075 1188963 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:52:16.506318 1188963 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	I1212 00:52:16.517803 1188963 api_server.go:182] apiserver freezer: "5:freezer:/docker/8cbfcb2f926f2933e9f6ac3a1ae628335b89b5892c0a645f94e42abd1790dda6/crio/crio-de16e9a7fe41ea7a2109dd69ccb1dc5027fd22075fb96d6cf98c508f95a14a7e"
	I1212 00:52:16.517870 1188963 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8cbfcb2f926f2933e9f6ac3a1ae628335b89b5892c0a645f94e42abd1790dda6/crio/crio-de16e9a7fe41ea7a2109dd69ccb1dc5027fd22075fb96d6cf98c508f95a14a7e/freezer.state
	I1212 00:52:16.534690 1188963 api_server.go:204] freezer state: "THAWED"
	I1212 00:52:16.534715 1188963 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1212 00:52:16.545179 1188963 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1212 00:52:16.545208 1188963 status.go:421] multinode-270339 apiserver status = Running (err=<nil>)
	I1212 00:52:16.545219 1188963 status.go:257] multinode-270339 status: &{Name:multinode-270339 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:52:16.545341 1188963 status.go:255] checking status of multinode-270339-m02 ...
	I1212 00:52:16.545667 1188963 cli_runner.go:164] Run: docker container inspect multinode-270339-m02 --format={{.State.Status}}
	I1212 00:52:16.562984 1188963 status.go:330] multinode-270339-m02 host status = "Running" (err=<nil>)
	I1212 00:52:16.563009 1188963 host.go:66] Checking if "multinode-270339-m02" exists ...
	I1212 00:52:16.563315 1188963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-270339-m02
	I1212 00:52:16.580451 1188963 host.go:66] Checking if "multinode-270339-m02" exists ...
	I1212 00:52:16.580758 1188963 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:52:16.580807 1188963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-270339-m02
	I1212 00:52:16.602842 1188963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34090 SSHKeyPath:/home/jenkins/minikube-integration/17764-1111943/.minikube/machines/multinode-270339-m02/id_rsa Username:docker}
	I1212 00:52:16.699235 1188963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:52:16.712094 1188963 status.go:257] multinode-270339-m02 status: &{Name:multinode-270339-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:52:16.712126 1188963 status.go:255] checking status of multinode-270339-m03 ...
	I1212 00:52:16.712427 1188963 cli_runner.go:164] Run: docker container inspect multinode-270339-m03 --format={{.State.Status}}
	I1212 00:52:16.729814 1188963 status.go:330] multinode-270339-m03 host status = "Stopped" (err=<nil>)
	I1212 00:52:16.729838 1188963 status.go:343] host is not running, skipping remaining checks
	I1212 00:52:16.729845 1188963 status.go:257] multinode-270339-m03 status: &{Name:multinode-270339-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)
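
The status output above is what the `exit status 7` encodes once any node in the profile is stopped. A minimal sketch of the same flow outside the harness, assuming the running multi-node profile from the log and using the plain `minikube` binary in place of `out/minikube-linux-arm64`:

    # Stop only the m03 worker; the control plane keeps running.
    minikube -p multinode-270339 node stop m03
    # status exits non-zero (7) because at least one node is stopped,
    # while still printing per-node host/kubelet state to stdout.
    minikube -p multinode-270339 status || echo "exit=$? (expected 7 with a stopped node)"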

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (13.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-270339 node start m03 --alsologtostderr: (12.525298385s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.38s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (123.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-270339
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-270339
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-270339: (24.934076201s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-270339 --wait=true -v=8 --alsologtostderr
E1212 00:53:17.506391 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 00:54:31.003541 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-270339 --wait=true -v=8 --alsologtostderr: (1m38.533183116s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-270339
--- PASS: TestMultiNode/serial/RestartKeepsNodes (123.63s)
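
The point of this test is that a full stop/start cycle preserves the node list. A sketch of the equivalent manual check, profile name taken from the log and `minikube` standing in for the test binary:

    # Record the node list, restart the whole profile, and compare.
    minikube node list -p multinode-270339
    minikube stop -p multinode-270339
    minikube start -p multinode-270339 --wait=true
    minikube node list -p multinode-270339   # should match the pre-stop list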

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-270339 node delete m03: (4.336028773s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.11s)
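
The go-template in the last kubectl call simply prints the Ready condition of every remaining node. A sketch of the post-delete verification, assuming kubectl already points at this cluster's kubeconfig:

    # Remove the m03 node from the profile, then confirm the cluster and
    # Docker both forgot about it.
    minikube -p multinode-270339 node delete m03
    minikube -p multinode-270339 status --alsologtostderr
    docker volume ls      # no multinode-270339-m03 volume should remain
    kubectl get nodes     # only the control plane and m02 remain
    kubectl get nodes -o 'go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'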

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 stop
E1212 00:54:40.551635 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-270339 stop: (23.781079172s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-270339 status: exit status 7 (106.135385ms)

                                                
                                                
-- stdout --
	multinode-270339
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-270339-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-270339 status --alsologtostderr: exit status 7 (101.097728ms)

                                                
                                                
-- stdout --
	multinode-270339
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-270339-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:55:02.810101 1197289 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:55:02.810268 1197289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:55:02.810293 1197289 out.go:309] Setting ErrFile to fd 2...
	I1212 00:55:02.810312 1197289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:55:02.810617 1197289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	I1212 00:55:02.810823 1197289 out.go:303] Setting JSON to false
	I1212 00:55:02.810894 1197289 mustload.go:65] Loading cluster: multinode-270339
	I1212 00:55:02.810986 1197289 notify.go:220] Checking for updates...
	I1212 00:55:02.811353 1197289 config.go:182] Loaded profile config "multinode-270339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 00:55:02.811371 1197289 status.go:255] checking status of multinode-270339 ...
	I1212 00:55:02.811977 1197289 cli_runner.go:164] Run: docker container inspect multinode-270339 --format={{.State.Status}}
	I1212 00:55:02.830789 1197289 status.go:330] multinode-270339 host status = "Stopped" (err=<nil>)
	I1212 00:55:02.830808 1197289 status.go:343] host is not running, skipping remaining checks
	I1212 00:55:02.830815 1197289 status.go:257] multinode-270339 status: &{Name:multinode-270339 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:55:02.830852 1197289 status.go:255] checking status of multinode-270339-m02 ...
	I1212 00:55:02.831149 1197289 cli_runner.go:164] Run: docker container inspect multinode-270339-m02 --format={{.State.Status}}
	I1212 00:55:02.848607 1197289 status.go:330] multinode-270339-m02 host status = "Stopped" (err=<nil>)
	I1212 00:55:02.848625 1197289 status.go:343] host is not running, skipping remaining checks
	I1212 00:55:02.848678 1197289 status.go:257] multinode-270339-m02 status: &{Name:multinode-270339-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.99s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (84.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-270339 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-270339 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m23.915969689s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-270339 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (84.69s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (32.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-270339
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-270339-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-270339-m02 --driver=docker  --container-runtime=crio: exit status 14 (95.94291ms)

                                                
                                                
-- stdout --
	* [multinode-270339-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-270339-m02' is duplicated with machine name 'multinode-270339-m02' in profile 'multinode-270339'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-270339-m03 --driver=docker  --container-runtime=crio
E1212 00:56:43.197293 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-270339-m03 --driver=docker  --container-runtime=crio: (30.227720896s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-270339
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-270339: exit status 80 (374.421851ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-270339
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-270339-m03 already exists in multinode-270339-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-270339-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-270339-m03: (2.05420348s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.82s)
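
Both failures here come from name collisions: a new profile may not reuse a machine name that an existing multi-node profile already owns, and `node add` refuses a node name that is already taken by a standalone profile. A sketch of the two expected errors, names taken from the log:

    # exit 14 (MK_USAGE): the machine name multinode-270339-m02 already
    # belongs to the multinode-270339 profile.
    minikube start -p multinode-270339-m02 --driver=docker --container-runtime=crio
    # Create a standalone profile whose name matches the next node name,
    # then exit 80 (GUEST_NODE_ADD) when node add tries to reuse it.
    minikube start -p multinode-270339-m03 --driver=docker --container-runtime=crio
    minikube node add -p multinode-270339
    minikube delete -p multinode-270339-m03   # cleanup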

                                                
                                    
x
+
TestPreload (179.34s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-373391 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1212 00:58:17.506316 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-373391 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m26.834545947s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-373391 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-373391 image pull gcr.io/k8s-minikube/busybox: (2.343171425s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-373391
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-373391: (5.812401501s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-373391 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1212 00:59:31.004331 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-373391 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m21.610523031s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-373391 image list
helpers_test.go:175: Cleaning up "test-preload-373391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-373391
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-373391: (2.470780017s)
--- PASS: TestPreload (179.34s)
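
The preload test verifies that an image pulled into a cluster created without the preloaded tarball survives a stop/start cycle. A sketch of the flow, with the profile name and Kubernetes version taken from the log:

    # Create the cluster without the preload tarball, pull an extra image,
    # restart, and confirm the image is still present afterwards.
    minikube start -p test-preload-373391 --memory=2200 --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
    minikube -p test-preload-373391 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-373391
    minikube start -p test-preload-373391 --memory=2200 --driver=docker --container-runtime=crio
    minikube -p test-preload-373391 image list   # busybox should still be listed
    minikube delete -p test-preload-373391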

                                                
                                    
x
+
TestScheduledStopUnix (107.77s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-674059 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-674059 --memory=2048 --driver=docker  --container-runtime=crio: (30.988608214s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-674059 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-674059 -n scheduled-stop-674059
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-674059 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-674059 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-674059 -n scheduled-stop-674059
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-674059
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-674059 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1212 01:01:43.197309 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-674059
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-674059: exit status 7 (82.912381ms)

                                                
                                                
-- stdout --
	scheduled-stop-674059
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-674059 -n scheduled-stop-674059
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-674059 -n scheduled-stop-674059: exit status 7 (83.093726ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-674059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-674059
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-674059: (4.971035673s)
--- PASS: TestScheduledStopUnix (107.77s)
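
The scheduled-stop test exercises `--schedule` and `--cancel-scheduled`: a pending stop can be replaced or cancelled, and once a short schedule fires, `status` reports the host as Stopped with exit 7. A sketch of that sequence with the profile name from the log:

    # Schedule a stop 5 minutes out, inspect it, then cancel it.
    minikube stop -p scheduled-stop-674059 --schedule 5m
    minikube status -p scheduled-stop-674059 --format='{{.TimeToStop}}'
    minikube stop -p scheduled-stop-674059 --cancel-scheduled
    # Schedule a short stop and wait for it to fire; status then exits 7.
    minikube stop -p scheduled-stop-674059 --schedule 15s
    sleep 30
    minikube status -p scheduled-stop-674059 --format='{{.Host}}'   # prints "Stopped"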

                                                
                                    
x
+
TestInsufficientStorage (11.01s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-550457 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-550457 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.409398405s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"32485d93-beb1-4ff5-b0b8-104efcd33b84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-550457] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4166f745-85c6-4661-9c68-0c8890c946b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17764"}}
	{"specversion":"1.0","id":"d0ec7f9d-0ebb-458b-9a3e-5dd53adc331c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5b30cf1b-7e1a-4422-b92e-2c25a3467ee0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig"}}
	{"specversion":"1.0","id":"1991847d-7602-4043-841c-0f2a9bcbf9d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube"}}
	{"specversion":"1.0","id":"96e59dad-7caf-4b05-a292-9d6651d141f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4c618aff-9761-4c21-9384-c07a4f77d313","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2411db56-e66e-4a4a-ab31-df6062dea059","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1c960763-b8ac-48e5-a9fd-1057d2b3a5b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"16a5bd71-3e5d-4be0-974b-6586ca21c7fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"397c687f-a356-440a-8fe1-905d74b3d7ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"f13b0129-063c-4436-bfec-45d5c8787427","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-550457 in cluster insufficient-storage-550457","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"37011e17-c0d8-44bb-a225-61a9f1ebaa5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"168b5241-0c08-460a-8213-0c5e0a4c6cbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1eac7f5-aa19-4389-86ca-0c9a97cad0d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-550457 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-550457 --output=json --layout=cluster: exit status 7 (333.714367ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-550457","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-550457","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 01:02:02.564475 1213897 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-550457" does not appear in /home/jenkins/minikube-integration/17764-1111943/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-550457 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-550457 --output=json --layout=cluster: exit status 7 (320.802139ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-550457","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-550457","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 01:02:02.887162 1213949 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-550457" does not appear in /home/jenkins/minikube-integration/17764-1111943/kubeconfig
	E1212 01:02:02.899117 1213949 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/insufficient-storage-550457/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-550457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-550457
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-550457: (1.948754664s)
--- PASS: TestInsufficientStorage (11.01s)
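
Exit code 26 (RSRC_DOCKER_STORAGE) is minikube's out-of-disk guard; the harness forces it with the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values visible in the JSON events above. A sketch of triggering and inspecting it, assuming those test-only variables are honored the same way outside this harness:

    # Pretend /var is effectively full so the storage check trips (exit 26).
    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p insufficient-storage-550457 --memory=2048 --output=json \
      --driver=docker --container-runtime=crio
    # The cluster status then reports StatusCode 507 (InsufficientStorage).
    minikube status -p insufficient-storage-550457 --output=json --layout=cluster
    minikube delete -p insufficient-storage-550457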

                                                
                                    
x
+
TestKubernetesUpgrade (384.9s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-147551 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1212 01:03:17.506385 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-147551 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m10.847311073s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-147551
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-147551: (2.724252446s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-147551 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-147551 status --format={{.Host}}: exit status 7 (80.599822ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-147551 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1212 01:04:31.004054 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-147551 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m41.246901213s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-147551 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-147551 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-147551 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (106.695344ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-147551] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-147551
	    minikube start -p kubernetes-upgrade-147551 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1475512 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-147551 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-147551 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-147551 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.235099027s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-147551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-147551
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-147551: (2.561099509s)
--- PASS: TestKubernetesUpgrade (384.90s)
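
The upgrade test starts at v1.16.0, upgrades to v1.29.0-rc.2, then confirms that asking for v1.16.0 again is rejected with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) while restarting at the upgraded version still works. A sketch of that sequence, versions and profile name taken from the log:

    minikube start -p kubernetes-upgrade-147551 --memory=2200 --kubernetes-version=v1.16.0 \
      --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-147551
    minikube start -p kubernetes-upgrade-147551 --memory=2200 --kubernetes-version=v1.29.0-rc.2 \
      --driver=docker --container-runtime=crio
    # Downgrading an existing cluster is refused (exit 106); the suggestion
    # block above lists the supported alternatives (delete and recreate, etc.).
    minikube start -p kubernetes-upgrade-147551 --memory=2200 --kubernetes-version=v1.16.0 \
      --driver=docker --container-runtime=crio
    # Restarting at the current version is accepted.
    minikube start -p kubernetes-upgrade-147551 --memory=2200 --kubernetes-version=v1.29.0-rc.2 \
      --driver=docker --container-runtime=crio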

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-476799 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-476799 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (93.385152ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-476799] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
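
`--no-kubernetes` and `--kubernetes-version` are mutually exclusive, which is what exit 14 reports here. A sketch of the rejected call and the remedy the error message itself suggests:

    # Rejected with MK_USAGE (exit 14): the two flags contradict each other.
    minikube start -p NoKubernetes-476799 --no-kubernetes --kubernetes-version=1.20 \
      --driver=docker --container-runtime=crio
    # If a version is pinned in the global config, clear it first.
    minikube config unset kubernetes-version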

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (39.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-476799 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-476799 --driver=docker  --container-runtime=crio: (39.170646473s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-476799 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (9.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-476799 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-476799 --no-kubernetes --driver=docker  --container-runtime=crio: (6.954944606s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-476799 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-476799 status -o json: exit status 2 (378.596509ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-476799","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-476799
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-476799: (2.521347499s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.86s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (9.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-476799 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-476799 --no-kubernetes --driver=docker  --container-runtime=crio: (9.840215775s)
--- PASS: TestNoKubernetes/serial/Start (9.84s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-476799 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-476799 "sudo systemctl is-active --quiet service kubelet": exit status 1 (371.616329ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)
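
The "not running" check is just systemd's is-active result surfaced through `minikube ssh`: the remote command exits 3 because the kubelet unit is not active, so the wrapper exits 1. A sketch of the same probe, profile name from the log:

    # Non-zero exit confirms kubelet is not active inside the --no-kubernetes node.
    minikube ssh -p NoKubernetes-476799 "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet inactive, as expected"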

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-476799
E1212 01:03:06.241321 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-476799: (1.265049697s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-476799 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-476799 --driver=docker  --container-runtime=crio: (7.709233325s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-476799 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-476799 "sudo systemctl is-active --quiet service kubelet": exit status 1 (323.92389ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.30s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-966595
E1212 01:06:43.197490 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

                                                
                                    
x
+
TestPause/serial/Start (77.22s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-515559 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1212 01:08:17.506559 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-515559 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m17.223635542s)
--- PASS: TestPause/serial/Start (77.22s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (30.62s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-515559 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1212 01:09:31.003305 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-515559 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.585880818s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.62s)

                                                
                                    
x
+
TestPause/serial/Pause (1.17s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-515559 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-515559 --alsologtostderr -v=5: (1.174109897s)
--- PASS: TestPause/serial/Pause (1.17s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.45s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-515559 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-515559 --output=json --layout=cluster: exit status 2 (453.381181ms)

                                                
                                                
-- stdout --
	{"Name":"pause-515559","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-515559","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.45s)

                                                
                                    
x
+
TestPause/serial/Unpause (1.07s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-515559 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-515559 --alsologtostderr -v=5: (1.070239159s)
--- PASS: TestPause/serial/Unpause (1.07s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.36s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-515559 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-515559 --alsologtostderr -v=5: (1.355478731s)
--- PASS: TestPause/serial/PauseAgain (1.36s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.95s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-515559 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-515559 --alsologtostderr -v=5: (2.952277729s)
--- PASS: TestPause/serial/DeletePaused (2.95s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.46s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-515559
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-515559: exit status 1 (16.874983ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-515559: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.46s)
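
The pause group ends by checking that `delete` really removed the container, volume, and network backing the profile; `docker volume inspect` failing with "no such volume" is the expected outcome. A sketch of the full pause lifecycle with the profile name from the log:

    minikube pause -p pause-515559 --alsologtostderr -v=5
    # Paused clusters report StatusCode 418 ("Paused") and status exits 2.
    minikube status -p pause-515559 --output=json --layout=cluster
    minikube unpause -p pause-515559
    minikube pause -p pause-515559
    minikube delete -p pause-515559
    # After delete, nothing should be left behind on the Docker side.
    docker ps -a | grep pause-515559 || true
    docker volume inspect pause-515559        # errors: no such volume
    docker network ls | grep pause-515559 || true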

                                                
                                    
x
+
TestNetworkPlugins/group/false (6.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-217400 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-217400 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (371.735396ms)

                                                
                                                
-- stdout --
	* [false-217400] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 01:10:13.651418 1252096 out.go:296] Setting OutFile to fd 1 ...
	I1212 01:10:13.651598 1252096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 01:10:13.651609 1252096 out.go:309] Setting ErrFile to fd 2...
	I1212 01:10:13.651615 1252096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 01:10:13.651852 1252096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-1111943/.minikube/bin
	I1212 01:10:13.652272 1252096 out.go:303] Setting JSON to false
	I1212 01:10:13.653231 1252096 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":28360,"bootTime":1702315054,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 01:10:13.655965 1252096 start.go:138] virtualization:  
	I1212 01:10:13.659207 1252096 out.go:177] * [false-217400] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1212 01:10:13.662151 1252096 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 01:10:13.664658 1252096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:10:13.662354 1252096 notify.go:220] Checking for updates...
	I1212 01:10:13.670082 1252096 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-1111943/kubeconfig
	I1212 01:10:13.672588 1252096 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-1111943/.minikube
	I1212 01:10:13.675012 1252096 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:10:13.677394 1252096 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:10:13.680375 1252096 config.go:182] Loaded profile config "force-systemd-flag-687419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 01:10:13.680549 1252096 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 01:10:13.732691 1252096 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 01:10:13.732806 1252096 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:10:13.874242 1252096 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-12 01:10:13.859674743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1212 01:10:13.874348 1252096 docker.go:295] overlay module found
	I1212 01:10:13.876751 1252096 out.go:177] * Using the docker driver based on user configuration
	I1212 01:10:13.878851 1252096 start.go:298] selected driver: docker
	I1212 01:10:13.878878 1252096 start.go:902] validating driver "docker" against <nil>
	I1212 01:10:13.878892 1252096 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:10:13.881577 1252096 out.go:177] 
	W1212 01:10:13.883819 1252096 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1212 01:10:13.885843 1252096 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-217400 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-217400

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-217400

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-217400

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-217400

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-217400

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-217400

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-217400

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-217400

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-217400

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-217400

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-217400

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-217400" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-217400" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-217400

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-217400"

                                                
                                                
----------------------- debugLogs end: false-217400 [took: 5.641205785s] --------------------------------
helpers_test.go:175: Cleaning up "false-217400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-217400
--- PASS: TestNetworkPlugins/group/false (6.30s)
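Note: this group is expected to fail fast, and the test records that as a pass: minikube rejects --cni=false together with --container-runtime=crio (MK_USAGE, exit status 14) because the crio runtime needs a CNI plugin. A rough standalone sketch of the same check, assuming a minikube binary on PATH and a hypothetical profile name rather than the net_test.go harness used above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors the rejected invocation above: the crio runtime with CNI disabled.
	cmd := exec.Command("minikube", "start", "-p", "false-example",
		"--cni=false", "--container-runtime=crio", "--driver=docker")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("unexpected success: crio without CNI should be rejected")
		return
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Printf("rejected as expected, exit code %d\n", exitErr.ExitCode()) // 14 in the run above
	}
	if strings.Contains(string(out), "requires CNI") {
		fmt.Println("found the MK_USAGE message seen in the log above")
	}
}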

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (120.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-096856 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1212 01:13:17.507725 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-096856 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m0.390953161s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (120.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-096856 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9cf35964-7cac-4a64-8915-160bedcac01f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9cf35964-7cac-4a64-8915-160bedcac01f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.031639978s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-096856 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.54s)
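Note: DeployApp waits up to 8m0s for the busybox pod to report Ready and then reads its open-file limit via kubectl exec. A roughly equivalent manual check, assuming kubectl on PATH and a hypothetical context name (the test uses its own polling helpers rather than kubectl wait):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "old-k8s-version-example" // hypothetical context name

	// Wait for the busybox pod to become Ready, then read its open-file limit,
	// roughly mirroring the deploy-and-verify sequence in the test above.
	wait := exec.Command("kubectl", "--context", ctx, "wait",
		"--for=condition=ready", "pod", "-l", "integration-test=busybox", "--timeout=8m0s")
	if out, err := wait.CombinedOutput(); err != nil {
		fmt.Printf("pod never became ready: %v\n%s", err, out)
		return
	}
	out, _ := exec.Command("kubectl", "--context", ctx, "exec", "busybox",
		"--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	fmt.Printf("busybox ulimit -n: %s", out)
}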

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-096856 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-096856 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-096856 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-096856 --alsologtostderr -v=3: (12.104694402s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-096856 -n old-k8s-version-096856
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-096856 -n old-k8s-version-096856: exit status 7 (93.633032ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-096856 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
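Note: with the profile stopped, minikube status --format={{.Host}} prints Stopped and exits with code 7, which the test explicitly treats as acceptable ("may be ok") before enabling the dashboard addon. A small sketch of reading that state, assuming a minikube binary on PATH and a hypothetical profile name:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe the host state the same way the status check above does.
	out, err := exec.Command("minikube", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-example").Output()
	if err == nil {
		fmt.Printf("host state %q, exit code 0\n", string(out))
		return
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		// In the log above a stopped host prints "Stopped" and exits 7.
		fmt.Printf("host state %q, exit code %d (may be ok)\n", string(out), exitErr.ExitCode())
		return
	}
	fmt.Println("could not run minikube:", err)
}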

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (432.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-096856 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1212 01:14:31.003085 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-096856 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m12.229479351s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-096856 -n old-k8s-version-096856
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (432.64s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (70.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-657387 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-657387 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m10.30693276s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-657387 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d22beb35-e661-4c82-a2e7-6a387a86efac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d22beb35-e661-4c82-a2e7-6a387a86efac] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.030527436s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-657387 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-657387 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-657387 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.037046305s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-657387 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-657387 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-657387 --alsologtostderr -v=3: (12.050507189s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-657387 -n no-preload-657387
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-657387 -n no-preload-657387: exit status 7 (98.979739ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-657387 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (352.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-657387 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E1212 01:16:43.197820 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
E1212 01:18:17.506921 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 01:19:14.050391 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 01:19:31.003266 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 01:19:46.242037 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-657387 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (5m52.135079976s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-657387 -n no-preload-657387
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (352.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-25f9r" [be0fad1d-f81a-4d4b-a819-ef777e85de30] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023352405s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-25f9r" [be0fad1d-f81a-4d4b-a819-ef777e85de30] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014291978s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-096856 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-096856 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (5.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-096856 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-096856 --alsologtostderr -v=1: (1.15076367s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-096856 -n old-k8s-version-096856
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-096856 -n old-k8s-version-096856: exit status 2 (574.929372ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-096856 -n old-k8s-version-096856
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-096856 -n old-k8s-version-096856: exit status 2 (565.119837ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-096856 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-096856 --alsologtostderr -v=1: (1.264585895s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-096856 -n old-k8s-version-096856
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-096856 -n old-k8s-version-096856
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (5.04s)
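Note: the Pause step shows the expected status behaviour while a profile is paused: status --format={{.APIServer}} prints Paused and status --format={{.Kubelet}} prints Stopped, both with exit code 2, and unpause restores them. A compact sketch of that pause/inspect/unpause cycle, assuming a minikube binary on PATH and a hypothetical profile name:

package main

import (
	"fmt"
	"os/exec"
)

// status returns the printed state for one status field plus the exit code
// (2 is what the paused profile reports in the log above).
func status(profile, field string) (string, int) {
	out, err := exec.Command("minikube", "status",
		"--format={{."+field+"}}", "-p", profile).Output()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return string(out), code
}

func main() {
	profile := "old-k8s-version-example" // hypothetical profile name

	if err := exec.Command("minikube", "pause", "-p", profile).Run(); err != nil {
		fmt.Println("pause failed:", err)
		return
	}
	api, code := status(profile, "APIServer")
	kubelet, _ := status(profile, "Kubelet")
	fmt.Printf("paused: apiserver=%q kubelet=%q exit=%d\n", api, kubelet, code)

	if err := exec.Command("minikube", "unpause", "-p", profile).Run(); err != nil {
		fmt.Println("unpause failed:", err)
		return
	}
	api, code = status(profile, "APIServer")
	fmt.Printf("unpaused: apiserver=%q exit=%d\n", api, code)
}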

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (86.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-604419 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1212 01:21:43.196974 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-604419 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m26.770198255s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.77s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ngrqv" [30b8d121-213f-4619-8bd0-690de0bbf672] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ngrqv" [30b8d121-213f-4619-8bd0-690de0bbf672] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.047422333s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ngrqv" [30b8d121-213f-4619-8bd0-690de0bbf672] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014503784s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-657387 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-657387 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-657387 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-657387 -n no-preload-657387
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-657387 -n no-preload-657387: exit status 2 (397.774167ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-657387 -n no-preload-657387
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-657387 -n no-preload-657387: exit status 2 (385.436878ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-657387 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-657387 -n no-preload-657387
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-657387 -n no-preload-657387
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.75s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-846571 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-846571 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m22.680835398s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-604419 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1e768966-355f-4b64-9951-9ffe8d95643f] Pending
helpers_test.go:344: "busybox" [1e768966-355f-4b64-9951-9ffe8d95643f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1e768966-355f-4b64-9951-9ffe8d95643f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.029356843s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-604419 exec busybox -- /bin/sh -c "ulimit -n"
E1212 01:23:17.506827 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-604419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-604419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.778964911s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-604419 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-604419 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-604419 --alsologtostderr -v=3: (12.155599147s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-604419 -n embed-certs-604419
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-604419 -n embed-certs-604419: exit status 7 (95.265648ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-604419 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (346.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-604419 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1212 01:23:47.844064 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
E1212 01:23:47.849327 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
E1212 01:23:47.859598 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
E1212 01:23:47.879866 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
E1212 01:23:47.920204 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
E1212 01:23:48.002957 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
E1212 01:23:48.163657 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
E1212 01:23:48.484390 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
E1212 01:23:49.124689 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
E1212 01:23:50.404838 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
E1212 01:23:52.965474 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-604419 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m45.409818738s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-604419 -n embed-certs-604419
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (346.07s)
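Note: the interleaved cert_rotation.go errors above are most likely client-go's certificate reloader still watching client.crt files of profiles that earlier tests in this run already deleted (old-k8s-version-096856 here); they are noise relative to this test's outcome. A quick way to see which profiles still exist under the same MINIKUBE_HOME, assuming the same binary:

    # list known profiles and their state; deleted profiles should no longer appear
    out/minikube-linux-arm64 profile list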

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-846571 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f01b8e6c-a291-471d-92c3-dd3b5bff5fbd] Pending
helpers_test.go:344: "busybox" [f01b8e6c-a291-471d-92c3-dd3b5bff5fbd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f01b8e6c-a291-471d-92c3-dd3b5bff5fbd] Running
E1212 01:23:58.086185 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.050363564s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-846571 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.60s)
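Note: DeployApp amounts to a create plus an exec; a hand-run approximation follows, where kubectl wait stands in for the test's own readiness polling (the test itself waits up to 8m via its helpers):

    kubectl --context default-k8s-diff-port-846571 create -f testdata/busybox.yaml
    # stand-in for the test's readiness polling
    kubectl --context default-k8s-diff-port-846571 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    # the actual assertion target: the open-file limit inside the container
    kubectl --context default-k8s-diff-port-846571 exec busybox -- /bin/sh -c "ulimit -n"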

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-846571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-846571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.1186185s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-846571 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)
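Note: EnableAddonWhileActive can be reproduced directly from the two commands above: enable the addon with the test's image and registry overrides, then inspect the resulting Deployment:

    out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-846571 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context default-k8s-diff-port-846571 describe deploy/metrics-server -n kube-system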

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-846571 --alsologtostderr -v=3
E1212 01:24:08.326802 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-846571 --alsologtostderr -v=3: (12.062437168s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-846571 -n default-k8s-diff-port-846571
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-846571 -n default-k8s-diff-port-846571: exit status 7 (88.428333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-846571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (356.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-846571 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1212 01:24:28.807340 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
E1212 01:24:31.004034 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
E1212 01:25:09.767573 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
E1212 01:25:50.687197 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:25:50.692626 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:25:50.702935 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:25:50.723252 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:25:50.763503 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:25:50.843822 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:25:51.004202 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:25:51.325107 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:25:51.965755 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:25:53.246304 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:25:55.806793 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:26:00.927323 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:26:11.168273 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:26:31.649162 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:26:31.688365 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
E1212 01:26:43.197654 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
E1212 01:27:12.609386 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:28:00.553962 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 01:28:17.506429 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
E1212 01:28:34.529597 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:28:47.843548 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
E1212 01:29:15.528679 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-846571 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m55.324306297s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-846571 -n default-k8s-diff-port-846571
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (356.20s)
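Note: this group's SecondStart differs from embed-certs only in the non-default API server port; the restart and the follow-up host probe, copied from the run above:

    out/minikube-linux-arm64 start -p default-k8s-diff-port-846571 --memory=2200 --alsologtostderr --wait=true \
        --apiserver-port=8444 --driver=docker --container-runtime=crio --kubernetes-version=v1.28.4
    out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-846571 -n default-k8s-diff-port-846571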

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (15.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-t486c" [07cafec5-b3e8-45bc-8b38-e2644074f3aa] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-t486c" [07cafec5-b3e8-45bc-8b38-e2644074f3aa] Running
E1212 01:29:31.003522 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.059772227s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (15.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-t486c" [07cafec5-b3e8-45bc-8b38-e2644074f3aa] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011934374s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-604419 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-604419 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)
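Note: VerifyKubernetesImages lists everything in the profile's CRI image store as JSON and reports anything outside the expected Kubernetes image set; the listing can be reproduced with the same command the test runs:

    # JSON listing of all images known to the container runtime in this profile
    out/minikube-linux-arm64 -p embed-certs-604419 image list --format=json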

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (4.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-604419 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-604419 --alsologtostderr -v=1: (1.083888681s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-604419 -n embed-certs-604419
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-604419 -n embed-certs-604419: exit status 2 (452.170933ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-604419 -n embed-certs-604419
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-604419 -n embed-certs-604419: exit status 2 (473.342535ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-604419 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-604419 -n embed-certs-604419
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-604419 -n embed-certs-604419
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.19s)
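Note: Pause follows a fixed sequence: pause the profile, confirm via status templates that the apiserver reports Paused and the kubelet reports Stopped (exit status 2 is expected for both probes), then unpause and probe again. The same sequence by hand, with the post-unpause values assumed to return to Running:

    out/minikube-linux-arm64 pause -p embed-certs-604419 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-604419 -n embed-certs-604419   # expect Paused, exit 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-604419 -n embed-certs-604419     # expect Stopped, exit 2
    out/minikube-linux-arm64 unpause -p embed-certs-604419 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-604419 -n embed-certs-604419   # should report Running again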

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (57.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-493696 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-493696 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (57.089519948s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (57.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (20.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fsxgf" [ac074c30-fc12-4e26-b831-0b3d10009583] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fsxgf" [ac074c30-fc12-4e26-b831-0b3d10009583] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 20.116020386s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (20.12s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fsxgf" [ac074c30-fc12-4e26-b831-0b3d10009583] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01017933s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-846571 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-846571 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-846571 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-846571 --alsologtostderr -v=1: (1.021804173s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-846571 -n default-k8s-diff-port-846571
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-846571 -n default-k8s-diff-port-846571: exit status 2 (429.363376ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-846571 -n default-k8s-diff-port-846571
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-846571 -n default-k8s-diff-port-846571: exit status 2 (486.297475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-846571 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-846571 --alsologtostderr -v=1: (1.302265864s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-846571 -n default-k8s-diff-port-846571
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-846571 -n default-k8s-diff-port-846571
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.04s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-493696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-493696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.969228112s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.97s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-493696 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-493696 --alsologtostderr -v=3: (1.757768224s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.76s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-493696 -n newest-cni-493696
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-493696 -n newest-cni-493696: exit status 7 (102.106674ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-493696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-493696 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-493696 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (36.617246973s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-493696 -n newest-cni-493696
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.05s)
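Note: the newest-cni group drives a 1.29 release candidate with an explicit CNI network plugin, a custom pod CIDR handed to kubeadm, and a reduced --wait set; the full start invocation, copied from the run above:

    out/minikube-linux-arm64 start -p newest-cni-493696 --memory=2200 --alsologtostderr \
        --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
        --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
        --driver=docker --container-runtime=crio --kubernetes-version=v1.29.0-rc.2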

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (82.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-217400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1212 01:30:50.687189 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:31:18.370510 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-217400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m22.360982066s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-493696 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-493696 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-493696 -n newest-cni-493696
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-493696 -n newest-cni-493696: exit status 2 (385.632296ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-493696 -n newest-cni-493696
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-493696 -n newest-cni-493696: exit status 2 (407.243022ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-493696 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-493696 -n newest-cni-493696
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-493696 -n newest-cni-493696
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.51s)
E1212 01:37:31.390466 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/auto-217400/client.crt: no such file or directory

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (81.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-217400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1212 01:31:43.197086 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-217400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m21.711645757s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (81.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-217400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-217400 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-chmfd" [36bfe19e-bba9-4059-9f40-515dc6c14320] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-chmfd" [36bfe19e-bba9-4059-9f40-515dc6c14320] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.015240272s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.43s)
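Note: each NetCatPod step force-replaces the netcat deployment from testdata and waits for the pod to become Ready; a hand-run approximation, with kubectl wait standing in for the test's 15m polling helper:

    kubectl --context auto-217400 replace --force -f testdata/netcat-deployment.yaml
    # stand-in for the test's readiness polling
    kubectl --context auto-217400 wait --for=condition=ready pod -l app=netcat --timeout=15m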

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-217400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-217400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-217400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
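Note: the three follow-up checks are the same for every network plugin group: in-cluster DNS resolution, a loopback connection, and a hairpin connection back through the pod's own Service. Collected from the commands above for the auto profile:

    # DNS: resolve the kubernetes service from inside the netcat pod
    kubectl --context auto-217400 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: connect to the pod's own listening port over loopback
    kubectl --context auto-217400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: connect via the "netcat" Service name, which routes back to the same pod
    kubectl --context auto-217400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"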

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (81.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-217400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-217400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m21.012388443s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-55wpd" [aa535f12-009a-4f25-8495-0d01a6fe4aa8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.032310909s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
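Note: ControllerPod only verifies that the plugin's own daemon pod is healthy; an equivalent manual check for this group, using the label the test waits on:

    # the kindnet DaemonSet pod should be Running in kube-system
    kubectl --context kindnet-217400 get pods -n kube-system -l app=kindnet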

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-217400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-217400 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rff8q" [1856f008-4d2d-45c7-b0fb-d561498bba5a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rff8q" [1856f008-4d2d-45c7-b0fb-d561498bba5a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.010846404s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-217400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-217400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-217400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (72.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-217400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1212 01:33:47.843270 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/old-k8s-version-096856/client.crt: no such file or directory
E1212 01:33:53.852586 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/default-k8s-diff-port-846571/client.crt: no such file or directory
E1212 01:33:53.857843 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/default-k8s-diff-port-846571/client.crt: no such file or directory
E1212 01:33:53.868103 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/default-k8s-diff-port-846571/client.crt: no such file or directory
E1212 01:33:53.888367 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/default-k8s-diff-port-846571/client.crt: no such file or directory
E1212 01:33:53.928614 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/default-k8s-diff-port-846571/client.crt: no such file or directory
E1212 01:33:54.008890 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/default-k8s-diff-port-846571/client.crt: no such file or directory
E1212 01:33:54.169208 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/default-k8s-diff-port-846571/client.crt: no such file or directory
E1212 01:33:54.489417 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/default-k8s-diff-port-846571/client.crt: no such file or directory
E1212 01:33:55.130123 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/default-k8s-diff-port-846571/client.crt: no such file or directory
E1212 01:33:56.410585 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/default-k8s-diff-port-846571/client.crt: no such file or directory
E1212 01:33:58.970854 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/default-k8s-diff-port-846571/client.crt: no such file or directory
E1212 01:34:04.091481 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/default-k8s-diff-port-846571/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-217400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m12.927527625s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.93s)
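Note: custom-flannel is the one group that points --cni at a manifest file shipped with the tests rather than a built-in plugin name; the start command, as run above:

    out/minikube-linux-arm64 start -p custom-flannel-217400 --memory=3072 --alsologtostderr --wait=true \
        --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio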

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-lxk98" [8ab9336d-f75e-4e04-b6e9-224590467a32] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.05682541s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-217400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-217400 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-45r7q" [5a82eb20-24ce-4814-b69d-c72d3981faf3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 01:34:14.332063 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/default-k8s-diff-port-846571/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-45r7q" [5a82eb20-24ce-4814-b69d-c72d3981faf3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.023618283s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-217400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-217400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-217400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (93.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-217400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-217400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m33.487790791s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (93.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-217400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-217400 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gzwhf" [4bc0da32-d9c3-4ce0-87da-b21f7c86f77c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gzwhf" [4bc0da32-d9c3-4ce0-87da-b21f7c86f77c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.011973776s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-217400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-217400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-217400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (69.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-217400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1212 01:35:50.687194 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/no-preload-657387/client.crt: no such file or directory
E1212 01:35:54.051391 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/addons-513852/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-217400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m9.261434118s)
--- PASS: TestNetworkPlugins/group/flannel/Start (69.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-217400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-217400 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vzltl" [2ea09e8d-ac4f-4fcb-9617-212bbfbcb061] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 01:36:26.242247 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-vzltl" [2ea09e8d-ac4f-4fcb-9617-212bbfbcb061] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.010715222s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-217400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-217400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-217400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5qdxq" [1285f03e-1086-4736-8203-ac2db1e30694] Running
E1212 01:36:43.197057 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/ingress-addon-legacy-996779/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.041754505s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)
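
The ControllerPod step simply waits for the flannel DaemonSet pod (label app=flannel in the kube-flannel namespace) to reach Running before the connectivity tests proceed. The same pod can be inspected by hand, assuming the flannel-217400 context:

	kubectl --context flannel-217400 -n kube-flannel get pods -l app=flannel -o wide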

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-217400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-217400 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6vsk4" [8ab9f04a-64e0-4e8a-8e30-caf7015f13a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6vsk4" [8ab9f04a-64e0-4e8a-8e30-caf7015f13a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.015529781s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-217400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (50.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-217400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-217400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (50.284940885s)
--- PASS: TestNetworkPlugins/group/bridge/Start (50.29s)
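
With --cni=bridge, minikube uses the built-in bridge CNI (a static config written on the node) rather than deploying a CNI DaemonSet. A quick way to see which CNI config landed on the node, assuming the bridge-217400 profile is still up:

	out/minikube-linux-arm64 ssh -p bridge-217400 "ls /etc/cni/net.d"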

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-217400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-217400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-217400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-217400 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zsd7b" [f802142f-acbe-4626-87b9-4d7a86aeed36] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 01:37:51.871140 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/auto-217400/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-zsd7b" [f802142f-acbe-4626-87b9-4d7a86aeed36] Running
E1212 01:37:52.530124 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/kindnet-217400/client.crt: no such file or directory
E1212 01:37:52.535578 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/kindnet-217400/client.crt: no such file or directory
E1212 01:37:52.545867 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/kindnet-217400/client.crt: no such file or directory
E1212 01:37:52.566205 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/kindnet-217400/client.crt: no such file or directory
E1212 01:37:52.606460 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/kindnet-217400/client.crt: no such file or directory
E1212 01:37:52.686823 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/kindnet-217400/client.crt: no such file or directory
E1212 01:37:52.847268 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/kindnet-217400/client.crt: no such file or directory
E1212 01:37:53.167910 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/kindnet-217400/client.crt: no such file or directory
E1212 01:37:53.808578 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/kindnet-217400/client.crt: no such file or directory
E1212 01:37:55.089347 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/kindnet-217400/client.crt: no such file or directory
E1212 01:37:57.649939 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/kindnet-217400/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.010681335s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (21.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-217400 exec deployment/netcat -- nslookup kubernetes.default
E1212 01:38:02.770542 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/kindnet-217400/client.crt: no such file or directory
E1212 01:38:13.010707 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/kindnet-217400/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-217400 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.189359147s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-217400 exec deployment/netcat -- nslookup kubernetes.default
E1212 01:38:17.506984 1117383 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/functional-885247/client.crt: no such file or directory
net_test.go:175: (dbg) Done: kubectl --context bridge-217400 exec deployment/netcat -- nslookup kubernetes.default: (5.175339458s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (21.42s)
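
The 21.42s here is mostly retry time: the first in-pod lookup of kubernetes.default timed out, and the test re-ran it until it resolved. A rough manual equivalent of that retry, assuming the bridge-217400 context and the netcat deployment:

	for i in 1 2 3; do
	  kubectl --context bridge-217400 exec deployment/netcat -- nslookup kubernetes.default && break
	  sleep 5
	done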

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-217400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-217400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    

Test skip (32/314)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.64s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-765600 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-765600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-765600
--- SKIP: TestDownloadOnlyKic (0.64s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-651263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-651263
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (6.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-217400 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-217400

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-217400

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-217400

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-217400

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-217400

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-217400

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-217400

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-217400

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-217400

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-217400

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-217400

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-217400" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-217400" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-217400

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-217400"

                                                
                                                
----------------------- debugLogs end: kubenet-217400 [took: 6.493517042s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-217400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-217400
--- SKIP: TestNetworkPlugins/group/kubenet (6.81s)
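
Every command in the debugLogs dump above fails with "context was not found" or "Profile ... not found" because the kubenet profile is never started: the test is skipped up front (crio requires a CNI, so kubenet does not apply), and the captured kubectl config is empty (clusters: null, contexts: null), which matches those errors. To confirm that state by hand in the same environment:

	out/minikube-linux-arm64 profile list
	kubectl config get-contexts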

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-217400 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-217400

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-217400

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-217400

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-217400

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-217400

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-217400

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-217400

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-217400

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-217400

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-217400

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-217400

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-217400" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-217400

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-217400

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-217400

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-217400

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-217400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-217400" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17764-1111943/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 01:10:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: force-systemd-flag-687419
contexts:
- context:
    cluster: force-systemd-flag-687419
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 01:10:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: force-systemd-flag-687419
  name: force-systemd-flag-687419
current-context: force-systemd-flag-687419
kind: Config
preferences: {}
users:
- name: force-systemd-flag-687419
  user:
    client-certificate: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/force-systemd-flag-687419/client.crt
    client-key: /home/jenkins/minikube-integration/17764-1111943/.minikube/profiles/force-systemd-flag-687419/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-217400

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-217400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-217400"

                                                
                                                
----------------------- debugLogs end: cilium-217400 [took: 5.642766078s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-217400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-217400
--- SKIP: TestNetworkPlugins/group/cilium (5.85s)
