Test Report: Docker_Linux_crio 17488

292152b7ba2fff47063f7712cda18987a57d80fb:2023-10-25:31605
Failed tests (6/308)

Order  Failed test  Duration (s)
28 TestAddons/parallel/Ingress 157.67
34 TestAddons/parallel/Headlamp 2.75
159 TestIngressAddonLegacy/serial/ValidateIngressAddons 175.79
209 TestMultiNode/serial/PingHostFrom2Pods 3.02
230 TestRunningBinaryUpgrade 70.73
256 TestStoppedBinaryUpgrade/Upgrade 77.29
TestAddons/parallel/Ingress (157.67s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-276457 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context addons-276457 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (2.871377745s)
addons_test.go:231: (dbg) Run:  kubectl --context addons-276457 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:231: (dbg) Non-zero exit: kubectl --context addons-276457 replace --force -f testdata/nginx-ingress-v1.yaml: exit status 1 (180.091375ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.110.255.147:443: connect: connection refused

** /stderr **
addons_test.go:231: (dbg) Run:  kubectl --context addons-276457 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:231: (dbg) Non-zero exit: kubectl --context addons-276457 replace --force -f testdata/nginx-ingress-v1.yaml: exit status 1 (150.104773ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.110.255.147:443: connect: connection refused

** /stderr **
addons_test.go:231: (dbg) Run:  kubectl --context addons-276457 replace --force -f testdata/nginx-ingress-v1.yaml
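The first two `kubectl replace` attempts above failed with "connection refused" because the ingress-nginx admission webhook endpoint was not yet reachable; the harness recovered simply by re-running the command. A minimal sketch of that retry pattern (hypothetical helper, not part of the minikube test code, which re-runs the command in Go):

```python
import subprocess
import time

def run_with_retry(cmd, attempts=3, delay=5.0):
    """Run a command, retrying on non-zero exit status.

    Useful for calls that race a slowly-registering admission webhook,
    as `kubectl replace` did above. Hypothetical helper for illustration.
    """
    result = None
    for attempt in range(attempts):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return result
        if attempt < attempts - 1:
            time.sleep(delay)  # give the webhook endpoint time to come up
    return result
```

In the log above the third attempt succeeded, which is consistent with the webhook service needing a few extra seconds after the controller pod reported ready.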
addons_test.go:244: (dbg) Run:  kubectl --context addons-276457 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [93522c0a-ba78-4861-a994-d40daa30a0c3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [93522c0a-ba78-4861-a994-d40daa30a0c3] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.028251699s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-276457 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-276457 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.710177709s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-276457 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-276457 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-276457 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-276457 addons disable ingress-dns --alsologtostderr -v=1: (1.098692214s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-276457 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-276457 addons disable ingress --alsologtostderr -v=1: (7.588032544s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-276457
helpers_test.go:235: (dbg) docker inspect addons-276457:

-- stdout --
	[
	    {
	        "Id": "8f1787a5dda684269d563bfd3f34339fa1c9e073fbbaba1ac14766eb9d10c359",
	        "Created": "2023-10-25T21:11:34.012909531Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 19897,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-25T21:11:34.311715955Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/8f1787a5dda684269d563bfd3f34339fa1c9e073fbbaba1ac14766eb9d10c359/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f1787a5dda684269d563bfd3f34339fa1c9e073fbbaba1ac14766eb9d10c359/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f1787a5dda684269d563bfd3f34339fa1c9e073fbbaba1ac14766eb9d10c359/hosts",
	        "LogPath": "/var/lib/docker/containers/8f1787a5dda684269d563bfd3f34339fa1c9e073fbbaba1ac14766eb9d10c359/8f1787a5dda684269d563bfd3f34339fa1c9e073fbbaba1ac14766eb9d10c359-json.log",
	        "Name": "/addons-276457",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-276457:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-276457",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d8cee1d16751d821163918220fe6d87821b97f56ce269eba65b96107ddc32555-init/diff:/var/lib/docker/overlay2/08f48c2099646ae35740a1c0f07609c9eefd4a79bbbda6d2c067385f70ad62be/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8cee1d16751d821163918220fe6d87821b97f56ce269eba65b96107ddc32555/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8cee1d16751d821163918220fe6d87821b97f56ce269eba65b96107ddc32555/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8cee1d16751d821163918220fe6d87821b97f56ce269eba65b96107ddc32555/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-276457",
	                "Source": "/var/lib/docker/volumes/addons-276457/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-276457",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-276457",
	                "name.minikube.sigs.k8s.io": "addons-276457",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c49c12eb175ca66a7f1c77a210afd495bdfada186779c43fc500aebe65e2d5d6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c49c12eb175c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-276457": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8f1787a5dda6",
	                        "addons-276457"
	                    ],
	                    "NetworkID": "ae6db73bce4272b8f387205e6fdf52e5e623531737d5981b3d82412778f26063",
	                    "EndpointID": "bf50b40c9bc4aecf2b8883464e01e67b95e44d9f6c6463ef5d06e0de62a7dbc6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-276457 -n addons-276457
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-276457 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-276457 logs -n 25: (1.149092797s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-868023                                                                     | download-only-868023   | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC | 25 Oct 23 21:11 UTC |
	| delete  | -p download-only-868023                                                                     | download-only-868023   | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC | 25 Oct 23 21:11 UTC |
	| start   | --download-only -p                                                                          | download-docker-264376 | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC |                     |
	|         | download-docker-264376                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-264376                                                                   | download-docker-264376 | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC | 25 Oct 23 21:11 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-856759   | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC |                     |
	|         | binary-mirror-856759                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40837                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-856759                                                                     | binary-mirror-856759   | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC | 25 Oct 23 21:11 UTC |
	| addons  | disable dashboard -p                                                                        | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC |                     |
	|         | addons-276457                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC |                     |
	|         | addons-276457                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-276457 --wait=true                                                                | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC | 25 Oct 23 21:13 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC | 25 Oct 23 21:13 UTC |
	|         | addons-276457                                                                               |                        |         |         |                     |                     |
	| addons  | addons-276457 addons disable                                                                | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC | 25 Oct 23 21:13 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-276457 ip                                                                            | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC | 25 Oct 23 21:13 UTC |
	| addons  | addons-276457 addons disable                                                                | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC | 25 Oct 23 21:13 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-276457 ssh cat                                                                       | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC | 25 Oct 23 21:13 UTC |
	|         | /opt/local-path-provisioner/pvc-b62d5b0e-4bb9-43b8-94d3-0062132da2ef_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC | 25 Oct 23 21:13 UTC |
	|         | -p addons-276457                                                                            |                        |         |         |                     |                     |
	| addons  | addons-276457 addons disable                                                                | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC | 25 Oct 23 21:14 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-276457 ssh curl -s                                                                   | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC | 25 Oct 23 21:13 UTC |
	|         | addons-276457                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC |                     |
	|         | -p addons-276457                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-276457 addons                                                                        | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC | 25 Oct 23 21:13 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-276457 addons                                                                        | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:14 UTC | 25 Oct 23 21:14 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-276457 addons                                                                        | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:14 UTC | 25 Oct 23 21:14 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-276457 ip                                                                            | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:15 UTC | 25 Oct 23 21:15 UTC |
	| addons  | addons-276457 addons disable                                                                | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:15 UTC | 25 Oct 23 21:15 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-276457 addons disable                                                                | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:15 UTC | 25 Oct 23 21:16 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 21:11:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 21:11:10.050783   19225 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:11:10.050950   19225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:11:10.050962   19225 out.go:309] Setting ErrFile to fd 2...
	I1025 21:11:10.050970   19225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:11:10.051164   19225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
	I1025 21:11:10.051828   19225 out.go:303] Setting JSON to false
	I1025 21:11:10.052679   19225 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3219,"bootTime":1698265051,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:11:10.052740   19225 start.go:138] virtualization: kvm guest
	I1025 21:11:10.054996   19225 out.go:177] * [addons-276457] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:11:10.056579   19225 notify.go:220] Checking for updates...
	I1025 21:11:10.056596   19225 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 21:11:10.057963   19225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:11:10.059324   19225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:11:10.060885   19225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	I1025 21:11:10.062234   19225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 21:11:10.063553   19225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:11:10.064991   19225 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:11:10.084355   19225 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:11:10.084414   19225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:11:10.133669   19225 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-10-25 21:11:10.125557202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:11:10.133776   19225 docker.go:295] overlay module found
	I1025 21:11:10.135786   19225 out.go:177] * Using the docker driver based on user configuration
	I1025 21:11:10.137496   19225 start.go:298] selected driver: docker
	I1025 21:11:10.137512   19225 start.go:902] validating driver "docker" against <nil>
	I1025 21:11:10.137522   19225 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:11:10.138222   19225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:11:10.183559   19225 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-10-25 21:11:10.175934542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:11:10.183747   19225 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 21:11:10.183960   19225 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:11:10.185920   19225 out.go:177] * Using Docker driver with root privileges
	I1025 21:11:10.187646   19225 cni.go:84] Creating CNI manager for ""
	I1025 21:11:10.187664   19225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 21:11:10.187677   19225 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 21:11:10.187708   19225 start_flags.go:323] config:
	{Name:addons-276457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-276457 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:11:10.189408   19225 out.go:177] * Starting control plane node addons-276457 in cluster addons-276457
	I1025 21:11:10.190778   19225 cache.go:121] Beginning downloading kic base image for docker with crio
	I1025 21:11:10.192127   19225 out.go:177] * Pulling base image ...
	I1025 21:11:10.193422   19225 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 21:11:10.193453   19225 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1025 21:11:10.193465   19225 cache.go:56] Caching tarball of preloaded images
	I1025 21:11:10.193517   19225 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 21:11:10.193571   19225 preload.go:174] Found /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 21:11:10.193585   19225 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1025 21:11:10.193926   19225 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/config.json ...
	I1025 21:11:10.193951   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/config.json: {Name:mk3778d29ed7a141fa579ee04d35ac0a42340c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:10.208155   19225 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1025 21:11:10.208264   19225 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1025 21:11:10.208280   19225 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory, skipping pull
	I1025 21:11:10.208285   19225 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in cache, skipping pull
	I1025 21:11:10.208295   19225 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1025 21:11:10.208300   19225 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 from local cache
	I1025 21:11:21.137474   19225 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 from cached tarball
	I1025 21:11:21.137507   19225 cache.go:194] Successfully downloaded all kic artifacts
	I1025 21:11:21.137534   19225 start.go:365] acquiring machines lock for addons-276457: {Name:mka6aae137d3f666d1cab21763ad542057ba8ff4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:11:21.137621   19225 start.go:369] acquired machines lock for "addons-276457" in 70.356µs
	I1025 21:11:21.137648   19225 start.go:93] Provisioning new machine with config: &{Name:addons-276457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-276457 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 21:11:21.137717   19225 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:11:21.139838   19225 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1025 21:11:21.140059   19225 start.go:159] libmachine.API.Create for "addons-276457" (driver="docker")
	I1025 21:11:21.140083   19225 client.go:168] LocalClient.Create starting
	I1025 21:11:21.140180   19225 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem
	I1025 21:11:21.266029   19225 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem
	I1025 21:11:21.474841   19225 cli_runner.go:164] Run: docker network inspect addons-276457 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:11:21.489833   19225 cli_runner.go:211] docker network inspect addons-276457 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:11:21.489889   19225 network_create.go:281] running [docker network inspect addons-276457] to gather additional debugging logs...
	I1025 21:11:21.489909   19225 cli_runner.go:164] Run: docker network inspect addons-276457
	W1025 21:11:21.503435   19225 cli_runner.go:211] docker network inspect addons-276457 returned with exit code 1
	I1025 21:11:21.503460   19225 network_create.go:284] error running [docker network inspect addons-276457]: docker network inspect addons-276457: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-276457 not found
	I1025 21:11:21.503476   19225 network_create.go:286] output of [docker network inspect addons-276457]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-276457 not found
	
	** /stderr **
	I1025 21:11:21.503563   19225 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:11:21.517895   19225 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023eaa20}
	I1025 21:11:21.517936   19225 network_create.go:124] attempt to create docker network addons-276457 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:11:21.517970   19225 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-276457 addons-276457
	I1025 21:11:21.566768   19225 network_create.go:108] docker network addons-276457 192.168.49.0/24 created
	I1025 21:11:21.566795   19225 kic.go:118] calculated static IP "192.168.49.2" for the "addons-276457" container
	I1025 21:11:21.566845   19225 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:11:21.581214   19225 cli_runner.go:164] Run: docker volume create addons-276457 --label name.minikube.sigs.k8s.io=addons-276457 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:11:21.597035   19225 oci.go:103] Successfully created a docker volume addons-276457
	I1025 21:11:21.597118   19225 cli_runner.go:164] Run: docker run --rm --name addons-276457-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-276457 --entrypoint /usr/bin/test -v addons-276457:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1025 21:11:28.812221   19225 cli_runner.go:217] Completed: docker run --rm --name addons-276457-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-276457 --entrypoint /usr/bin/test -v addons-276457:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib: (7.215052344s)
	I1025 21:11:28.812255   19225 oci.go:107] Successfully prepared a docker volume addons-276457
	I1025 21:11:28.812281   19225 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 21:11:28.812303   19225 kic.go:191] Starting extracting preloaded images to volume ...
	I1025 21:11:28.812359   19225 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-276457:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 21:11:33.947727   19225 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-276457:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (5.135303636s)
	I1025 21:11:33.947758   19225 kic.go:200] duration metric: took 5.135453 seconds to extract preloaded images to volume
	W1025 21:11:33.947908   19225 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 21:11:33.948005   19225 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 21:11:33.999394   19225 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-276457 --name addons-276457 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-276457 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-276457 --network addons-276457 --ip 192.168.49.2 --volume addons-276457:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 21:11:34.319460   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Running}}
	I1025 21:11:34.337242   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:11:34.353775   19225 cli_runner.go:164] Run: docker exec addons-276457 stat /var/lib/dpkg/alternatives/iptables
	I1025 21:11:34.392061   19225 oci.go:144] the created container "addons-276457" has a running status.
	I1025 21:11:34.392097   19225 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa...
	I1025 21:11:34.624266   19225 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 21:11:34.645085   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:11:34.667614   19225 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 21:11:34.667635   19225 kic_runner.go:114] Args: [docker exec --privileged addons-276457 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 21:11:34.739948   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:11:34.756615   19225 machine.go:88] provisioning docker machine ...
	I1025 21:11:34.756651   19225 ubuntu.go:169] provisioning hostname "addons-276457"
	I1025 21:11:34.756705   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:11:34.782585   19225 main.go:141] libmachine: Using SSH client type: native
	I1025 21:11:34.782944   19225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1025 21:11:34.782959   19225 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-276457 && echo "addons-276457" | sudo tee /etc/hostname
	I1025 21:11:34.947870   19225 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-276457
	
	I1025 21:11:34.947950   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:11:34.965434   19225 main.go:141] libmachine: Using SSH client type: native
	I1025 21:11:34.965918   19225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1025 21:11:34.965947   19225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-276457' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-276457/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-276457' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 21:11:35.081823   19225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 21:11:35.081848   19225 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17488-11542/.minikube CaCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17488-11542/.minikube}
	I1025 21:11:35.081870   19225 ubuntu.go:177] setting up certificates
	I1025 21:11:35.081879   19225 provision.go:83] configureAuth start
	I1025 21:11:35.081930   19225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-276457
	I1025 21:11:35.097634   19225 provision.go:138] copyHostCerts
	I1025 21:11:35.097692   19225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem (1123 bytes)
	I1025 21:11:35.097792   19225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem (1675 bytes)
	I1025 21:11:35.097889   19225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem (1078 bytes)
	I1025 21:11:35.097934   19225 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem org=jenkins.addons-276457 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-276457]
	I1025 21:11:35.320075   19225 provision.go:172] copyRemoteCerts
	I1025 21:11:35.320122   19225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 21:11:35.320151   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:11:35.335869   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:11:35.426027   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 21:11:35.445848   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 21:11:35.465535   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1025 21:11:35.484665   19225 provision.go:86] duration metric: configureAuth took 402.773332ms
	I1025 21:11:35.484690   19225 ubuntu.go:193] setting minikube options for container-runtime
	I1025 21:11:35.484848   19225 config.go:182] Loaded profile config "addons-276457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:11:35.484953   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:11:35.500482   19225 main.go:141] libmachine: Using SSH client type: native
	I1025 21:11:35.500843   19225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1025 21:11:35.500862   19225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 21:11:35.699473   19225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 21:11:35.699502   19225 machine.go:91] provisioned docker machine in 942.864685ms
	I1025 21:11:35.699514   19225 client.go:171] LocalClient.Create took 14.559422537s
	I1025 21:11:35.699531   19225 start.go:167] duration metric: libmachine.API.Create for "addons-276457" took 14.559471187s
	I1025 21:11:35.699540   19225 start.go:300] post-start starting for "addons-276457" (driver="docker")
	I1025 21:11:35.699554   19225 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 21:11:35.699634   19225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 21:11:35.699685   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:11:35.715270   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:11:35.802021   19225 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 21:11:35.804742   19225 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 21:11:35.804770   19225 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 21:11:35.804779   19225 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 21:11:35.804785   19225 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 21:11:35.804793   19225 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-11542/.minikube/addons for local assets ...
	I1025 21:11:35.804838   19225 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-11542/.minikube/files for local assets ...
	I1025 21:11:35.804860   19225 start.go:303] post-start completed in 105.312939ms
	I1025 21:11:35.805100   19225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-276457
	I1025 21:11:35.820277   19225 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/config.json ...
	I1025 21:11:35.820493   19225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:11:35.820529   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:11:35.835830   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:11:35.922515   19225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:11:35.926315   19225 start.go:128] duration metric: createHost completed in 14.788586203s
	I1025 21:11:35.926333   19225 start.go:83] releasing machines lock for "addons-276457", held for 14.788701726s
	I1025 21:11:35.926409   19225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-276457
	I1025 21:11:35.941586   19225 ssh_runner.go:195] Run: cat /version.json
	I1025 21:11:35.941626   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:11:35.941664   19225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 21:11:35.941717   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:11:35.958635   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:11:35.959381   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:11:36.142609   19225 ssh_runner.go:195] Run: systemctl --version
	I1025 21:11:36.146463   19225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 21:11:36.280603   19225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 21:11:36.284615   19225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:11:36.300625   19225 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1025 21:11:36.300712   19225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:11:36.325528   19225 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
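The CNI-disabling step above is a rename pass over /etc/cni/net.d. A standalone sketch against a scratch directory (file names taken from this log; the temp dir stands in for /etc/cni/net.d, which you should only touch on a disposable node):

```shell
# Scratch stand-in for /etc/cni/net.d, seeded with the configs this log found.
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/100-crio-bridge.conf" "$d/200-loopback.conf"

# Disable loopback configs, then any bridge/podman configs, by renaming
# them with a .mk_disabled suffix so CRI-O stops loading them.
find "$d" -maxdepth 1 -type f -name '*loopback.conf*' -not -name '*.mk_disabled' \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
find "$d" -maxdepth 1 -type f \( \( -name '*bridge*' -o -name '*podman*' \) \
  -a -not -name '*.mk_disabled' \) -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$d"
```

The `.mk_disabled` suffix is reversible, which is why minikube renames rather than deletes: a later `mv` can restore the original configs.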
	I1025 21:11:36.325554   19225 start.go:472] detecting cgroup driver to use...
	I1025 21:11:36.325593   19225 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 21:11:36.325637   19225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 21:11:36.337875   19225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 21:11:36.347343   19225 docker.go:198] disabling cri-docker service (if available) ...
	I1025 21:11:36.347389   19225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 21:11:36.358902   19225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 21:11:36.370762   19225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 21:11:36.446974   19225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 21:11:36.530660   19225 docker.go:214] disabling docker service ...
	I1025 21:11:36.530727   19225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 21:11:36.546727   19225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 21:11:36.556324   19225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 21:11:36.633008   19225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 21:11:36.710165   19225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 21:11:36.720097   19225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 21:11:36.733193   19225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 21:11:36.733237   19225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:11:36.741031   19225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 21:11:36.741079   19225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:11:36.748698   19225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:11:36.756434   19225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:11:36.764403   19225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 21:11:36.771721   19225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 21:11:36.778419   19225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 21:11:36.784921   19225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 21:11:36.855096   19225 ssh_runner.go:195] Run: sudo systemctl restart crio
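The four sed invocations above can be reproduced against a scratch copy of the CRI-O drop-in config (values mirror this log; GNU sed assumed, and you would only point this at the real /etc/crio/crio.conf.d/02-crio.conf on a disposable node):

```shell
# Scratch copy of 02-crio.conf with pre-edit defaults (hypothetical values).
conf=$(mktemp)
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.8"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Pin the pause image the kubelet expects.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
# Match the cgroup driver detected on the host ("cgroupfs" in this run).
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
# Replace conmon_cgroup: delete the old line, re-add "pod" after cgroup_manager.
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

cat "$conf"
```

After editing the real file, the log's `systemctl daemon-reload` and `systemctl restart crio` pick the new values up.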
	I1025 21:11:36.964048   19225 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 21:11:36.964145   19225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 21:11:36.967359   19225 start.go:540] Will wait 60s for crictl version
	I1025 21:11:36.967399   19225 ssh_runner.go:195] Run: which crictl
	I1025 21:11:36.970102   19225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 21:11:37.000915   19225 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1025 21:11:37.001025   19225 ssh_runner.go:195] Run: crio --version
	I1025 21:11:37.032435   19225 ssh_runner.go:195] Run: crio --version
	I1025 21:11:37.065447   19225 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1025 21:11:37.066955   19225 cli_runner.go:164] Run: docker network inspect addons-276457 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:11:37.082427   19225 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 21:11:37.085878   19225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 21:11:37.095445   19225 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 21:11:37.095513   19225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 21:11:37.147911   19225 crio.go:496] all images are preloaded for cri-o runtime.
	I1025 21:11:37.147936   19225 crio.go:415] Images already preloaded, skipping extraction
	I1025 21:11:37.147993   19225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 21:11:37.176883   19225 crio.go:496] all images are preloaded for cri-o runtime.
	I1025 21:11:37.176902   19225 cache_images.go:84] Images are preloaded, skipping loading
	I1025 21:11:37.176959   19225 ssh_runner.go:195] Run: crio config
	I1025 21:11:37.216315   19225 cni.go:84] Creating CNI manager for ""
	I1025 21:11:37.216334   19225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 21:11:37.216349   19225 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 21:11:37.216364   19225 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-276457 NodeName:addons-276457 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 21:11:37.216480   19225 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-276457"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 21:11:37.216544   19225 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-276457 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-276457 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 21:11:37.216590   19225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 21:11:37.224247   19225 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 21:11:37.224307   19225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 21:11:37.231565   19225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1025 21:11:37.245897   19225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 21:11:37.260252   19225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1025 21:11:37.274674   19225 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 21:11:37.277425   19225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
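The /etc/hosts rewrite above is idempotent: strip any stale line for the name, then append the fresh mapping. A sketch against a temp file (IP and hostname are the ones this run uses; only the final `cp` in the log needs sudo because it targets /etc/hosts):

```shell
# Temp stand-in for /etc/hosts, seeded with an existing (possibly stale) entry.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.2\tcontrol-plane.minikube.internal\n' > "$hosts"

# Tab-anchored pattern so only real host entries match, not substrings.
pat=$(printf '\tcontrol-plane.minikube.internal$')
{ grep -v "$pat" "$hosts"
  printf '192.168.49.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

cat "$hosts"
```

Running it twice leaves the file unchanged, which is why minikube can repeat it on every start.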
	I1025 21:11:37.286125   19225 certs.go:56] Setting up /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457 for IP: 192.168.49.2
	I1025 21:11:37.286157   19225 certs.go:190] acquiring lock for shared ca certs: {Name:mk35413dbabac2652d1fa66d4e17d237360108a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.286271   19225 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key
	I1025 21:11:37.366588   19225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt ...
	I1025 21:11:37.366614   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt: {Name:mkefe46340403c86f272053d2be94b125b0e830e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.366771   19225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key ...
	I1025 21:11:37.366781   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key: {Name:mke1b03fa8b0a61edd372405bab4cc2e83047e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.366846   19225 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key
	I1025 21:11:37.582977   19225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.crt ...
	I1025 21:11:37.583001   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.crt: {Name:mkfd638367e0523ada76601355cf5b82c5609ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.583157   19225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key ...
	I1025 21:11:37.583167   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key: {Name:mk41a913670aa409f35f53803f3e356eb2c82175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.583262   19225 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.key
	I1025 21:11:37.583274   19225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt with IP's: []
	I1025 21:11:37.649266   19225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt ...
	I1025 21:11:37.649294   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: {Name:mka3fa749f033f7a4bef4f320d595255d33c27bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.649437   19225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.key ...
	I1025 21:11:37.649449   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.key: {Name:mk723382aaad916e2596dc57aa70df97172720dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.649508   19225 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.key.dd3b5fb2
	I1025 21:11:37.649524   19225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 21:11:37.811523   19225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.crt.dd3b5fb2 ...
	I1025 21:11:37.811549   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.crt.dd3b5fb2: {Name:mk6f846e9feb3735bec33b4b77765f793d9a50e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.811692   19225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.key.dd3b5fb2 ...
	I1025 21:11:37.811702   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.key.dd3b5fb2: {Name:mk0ae9d7e303f74f16bbc9aa8d97d83c2d6be466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.811775   19225 certs.go:337] copying /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.crt
	I1025 21:11:37.811848   19225 certs.go:341] copying /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.key
	I1025 21:11:37.811894   19225 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/proxy-client.key
	I1025 21:11:37.811910   19225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/proxy-client.crt with IP's: []
	I1025 21:11:38.114475   19225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/proxy-client.crt ...
	I1025 21:11:38.114501   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/proxy-client.crt: {Name:mk57616c3a58ba5609f71620261fc4676b8d6794 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:38.114640   19225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/proxy-client.key ...
	I1025 21:11:38.114650   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/proxy-client.key: {Name:mk3f4e0dc07bd6d6285ff2e61abd9c57717a9b7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:38.114795   19225 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 21:11:38.114827   19225 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem (1078 bytes)
	I1025 21:11:38.114851   19225 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem (1123 bytes)
	I1025 21:11:38.114875   19225 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem (1675 bytes)
	I1025 21:11:38.115371   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 21:11:38.136227   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 21:11:38.155372   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 21:11:38.174962   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 21:11:38.194627   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 21:11:38.213849   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 21:11:38.232677   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 21:11:38.251982   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 21:11:38.271086   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 21:11:38.291176   19225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 21:11:38.306087   19225 ssh_runner.go:195] Run: openssl version
	I1025 21:11:38.310775   19225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 21:11:38.318832   19225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:11:38.321699   19225 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:11:38.321740   19225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:11:38.327743   19225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
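The hash-symlink step above (`/etc/ssl/certs/b5213941.0`) exists because OpenSSL looks CA certs up by `<subject-hash>.0` inside the certs directory. A sketch with a throwaway CA standing in for minikubeCA.pem (requires the `openssl` CLI):

```shell
# Generate a disposable self-signed CA in a temp dir (stand-in for minikubeCA.pem).
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA" \
  -keyout "$d/ca.key" -out "$d/minikubeCA.pem" 2>/dev/null

# Compute the subject hash and create the <hash>.0 symlink OpenSSL expects.
hash=$(openssl x509 -hash -noout -in "$d/minikubeCA.pem")
ln -fs "$d/minikubeCA.pem" "$d/$hash.0"

ls -l "$d"
```

The `ln -fs` mirrors the log's `test -L ... || ln -fs ...` guard: re-running it just refreshes the link.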
	I1025 21:11:38.336136   19225 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 21:11:38.339362   19225 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 21:11:38.339407   19225 kubeadm.go:404] StartCluster: {Name:addons-276457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-276457 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:11:38.339490   19225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 21:11:38.339538   19225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 21:11:38.370511   19225 cri.go:89] found id: ""
	I1025 21:11:38.370573   19225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 21:11:38.377871   19225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 21:11:38.385040   19225 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 21:11:38.385090   19225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 21:11:38.392797   19225 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 21:11:38.392843   19225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 21:11:38.465551   19225 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1025 21:11:38.522322   19225 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 21:11:47.437288   19225 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1025 21:11:47.437365   19225 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 21:11:47.437494   19225 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1025 21:11:47.437576   19225 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-gcp
	I1025 21:11:47.437644   19225 kubeadm.go:322] OS: Linux
	I1025 21:11:47.437728   19225 kubeadm.go:322] CGROUPS_CPU: enabled
	I1025 21:11:47.437795   19225 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1025 21:11:47.437867   19225 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1025 21:11:47.437926   19225 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1025 21:11:47.438009   19225 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1025 21:11:47.438101   19225 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1025 21:11:47.438180   19225 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1025 21:11:47.438261   19225 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1025 21:11:47.438355   19225 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1025 21:11:47.438456   19225 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 21:11:47.438572   19225 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 21:11:47.438705   19225 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 21:11:47.438813   19225 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 21:11:47.440447   19225 out.go:204]   - Generating certificates and keys ...
	I1025 21:11:47.440551   19225 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 21:11:47.440643   19225 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 21:11:47.440742   19225 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 21:11:47.440834   19225 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1025 21:11:47.440929   19225 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1025 21:11:47.441003   19225 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1025 21:11:47.441079   19225 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1025 21:11:47.441238   19225 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-276457 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 21:11:47.441309   19225 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1025 21:11:47.441471   19225 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-276457 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 21:11:47.441580   19225 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 21:11:47.441674   19225 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 21:11:47.441741   19225 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1025 21:11:47.441816   19225 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 21:11:47.441907   19225 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 21:11:47.442010   19225 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 21:11:47.442109   19225 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 21:11:47.442194   19225 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 21:11:47.442326   19225 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 21:11:47.442423   19225 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 21:11:47.444180   19225 out.go:204]   - Booting up control plane ...
	I1025 21:11:47.444313   19225 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 21:11:47.444510   19225 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 21:11:47.444599   19225 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 21:11:47.444718   19225 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 21:11:47.444844   19225 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 21:11:47.444906   19225 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 21:11:47.445066   19225 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 21:11:47.445144   19225 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001856 seconds
	I1025 21:11:47.445257   19225 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 21:11:47.445387   19225 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 21:11:47.445490   19225 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 21:11:47.445743   19225 kubeadm.go:322] [mark-control-plane] Marking the node addons-276457 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 21:11:47.445792   19225 kubeadm.go:322] [bootstrap-token] Using token: fbrqzi.9feo4t3e7ievi3oe
	I1025 21:11:47.447288   19225 out.go:204]   - Configuring RBAC rules ...
	I1025 21:11:47.447399   19225 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 21:11:47.447483   19225 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 21:11:47.447635   19225 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 21:11:47.447819   19225 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1025 21:11:47.448014   19225 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 21:11:47.448152   19225 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 21:11:47.448315   19225 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 21:11:47.448358   19225 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1025 21:11:47.448398   19225 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1025 21:11:47.448404   19225 kubeadm.go:322] 
	I1025 21:11:47.448487   19225 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1025 21:11:47.448501   19225 kubeadm.go:322] 
	I1025 21:11:47.448609   19225 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1025 21:11:47.448620   19225 kubeadm.go:322] 
	I1025 21:11:47.448656   19225 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1025 21:11:47.448743   19225 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 21:11:47.448822   19225 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 21:11:47.448834   19225 kubeadm.go:322] 
	I1025 21:11:47.448921   19225 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1025 21:11:47.448935   19225 kubeadm.go:322] 
	I1025 21:11:47.449003   19225 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 21:11:47.449015   19225 kubeadm.go:322] 
	I1025 21:11:47.449101   19225 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1025 21:11:47.449189   19225 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 21:11:47.449263   19225 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 21:11:47.449273   19225 kubeadm.go:322] 
	I1025 21:11:47.449360   19225 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 21:11:47.449459   19225 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1025 21:11:47.449468   19225 kubeadm.go:322] 
	I1025 21:11:47.449565   19225 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fbrqzi.9feo4t3e7ievi3oe \
	I1025 21:11:47.449685   19225 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:81aa62e087573fa9098e2a57ea7cc4407ea343d82712bf34cdaff83258d6f892 \
	I1025 21:11:47.449724   19225 kubeadm.go:322] 	--control-plane 
	I1025 21:11:47.449739   19225 kubeadm.go:322] 
	I1025 21:11:47.449837   19225 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1025 21:11:47.449849   19225 kubeadm.go:322] 
	I1025 21:11:47.449963   19225 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fbrqzi.9feo4t3e7ievi3oe \
	I1025 21:11:47.450089   19225 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:81aa62e087573fa9098e2a57ea7cc4407ea343d82712bf34cdaff83258d6f892 
	I1025 21:11:47.450105   19225 cni.go:84] Creating CNI manager for ""
	I1025 21:11:47.450115   19225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 21:11:47.451769   19225 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1025 21:11:47.453178   19225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 21:11:47.457069   19225 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1025 21:11:47.457087   19225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1025 21:11:47.473600   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 21:11:48.125951   19225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 21:11:48.126037   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:48.126071   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc minikube.k8s.io/name=addons-276457 minikube.k8s.io/updated_at=2023_10_25T21_11_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:48.196147   19225 ops.go:34] apiserver oom_adj: -16
	I1025 21:11:48.196264   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:48.270917   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:48.832635   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:49.332777   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:49.832638   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:50.332067   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:50.832763   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:51.332370   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:51.832991   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:52.332631   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:52.832366   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:53.332282   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:53.832312   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:54.332056   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:54.832425   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:55.332158   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:55.832089   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:56.332186   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:56.832181   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:57.332452   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:57.832486   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:58.332510   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:58.832779   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:59.332202   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:59.832352   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:12:00.332048   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:12:00.394997   19225 kubeadm.go:1081] duration metric: took 12.26901345s to wait for elevateKubeSystemPrivileges.
	I1025 21:12:00.395028   19225 kubeadm.go:406] StartCluster complete in 22.055625096s
	I1025 21:12:00.395047   19225 settings.go:142] acquiring lock: {Name:mkdc9277e8465489704340df47f71845c1a0d579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:12:00.395151   19225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:12:00.395493   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/kubeconfig: {Name:mk64fd87b209032b3c81ef85df6a4de19f21a5bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:12:00.395680   19225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 21:12:00.395715   19225 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
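The `toEnable` map above is a flat addon-name → bool toggle; everything that follows is minikube fanning out one goroutine per enabled addon. A hedged sketch of extracting the enabled set (helper name is illustrative; the real plumbing lives in minikube's addons package):

```go
package main

import (
	"fmt"
	"sort"
)

// enabledAddons returns the sorted addon names whose toggle is true.
func enabledAddons(toEnable map[string]bool) []string {
	var names []string
	for name, on := range toEnable {
		if on {
			names = append(names, name)
		}
	}
	sort.Strings(names)
	return names
}

func main() {
	toEnable := map[string]bool{"ingress": true, "registry": true, "olm": false}
	fmt.Println(enabledAddons(toEnable)) // [ingress registry]
}
```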
	I1025 21:12:00.395792   19225 addons.go:69] Setting default-storageclass=true in profile "addons-276457"
	I1025 21:12:00.395804   19225 addons.go:69] Setting volumesnapshots=true in profile "addons-276457"
	I1025 21:12:00.395808   19225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-276457"
	I1025 21:12:00.395806   19225 addons.go:69] Setting cloud-spanner=true in profile "addons-276457"
	I1025 21:12:00.395818   19225 addons.go:231] Setting addon volumesnapshots=true in "addons-276457"
	I1025 21:12:00.395833   19225 addons.go:69] Setting inspektor-gadget=true in profile "addons-276457"
	I1025 21:12:00.395847   19225 addons.go:231] Setting addon cloud-spanner=true in "addons-276457"
	I1025 21:12:00.395852   19225 addons.go:231] Setting addon inspektor-gadget=true in "addons-276457"
	I1025 21:12:00.395865   19225 config.go:182] Loaded profile config "addons-276457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:12:00.395873   19225 addons.go:69] Setting registry=true in profile "addons-276457"
	I1025 21:12:00.395897   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.395897   19225 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-276457"
	I1025 21:12:00.395894   19225 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-276457"
	I1025 21:12:00.395905   19225 addons.go:231] Setting addon registry=true in "addons-276457"
	I1025 21:12:00.395911   19225 addons.go:69] Setting helm-tiller=true in profile "addons-276457"
	I1025 21:12:00.395911   19225 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-276457"
	I1025 21:12:00.395919   19225 addons.go:69] Setting ingress=true in profile "addons-276457"
	I1025 21:12:00.395926   19225 addons.go:69] Setting ingress-dns=true in profile "addons-276457"
	I1025 21:12:00.395936   19225 addons.go:231] Setting addon ingress=true in "addons-276457"
	I1025 21:12:00.395936   19225 addons.go:231] Setting addon ingress-dns=true in "addons-276457"
	I1025 21:12:00.395941   19225 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-276457"
	I1025 21:12:00.395951   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.395976   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.395978   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.395983   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.396169   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.396169   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.396256   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.396333   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.396393   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.396404   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.396431   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.395868   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.397164   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.395920   19225 addons.go:231] Setting addon helm-tiller=true in "addons-276457"
	I1025 21:12:00.395897   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.397479   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.397766   19225 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-276457"
	I1025 21:12:00.397784   19225 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-276457"
	I1025 21:12:00.397825   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.397892   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.397923   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.395888   19225 addons.go:69] Setting storage-provisioner=true in profile "addons-276457"
	I1025 21:12:00.398553   19225 addons.go:231] Setting addon storage-provisioner=true in "addons-276457"
	I1025 21:12:00.398598   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.399045   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.395905   19225 addons.go:69] Setting gcp-auth=true in profile "addons-276457"
	I1025 21:12:00.401401   19225 mustload.go:65] Loading cluster: addons-276457
	I1025 21:12:00.395904   19225 addons.go:69] Setting metrics-server=true in profile "addons-276457"
	I1025 21:12:00.402245   19225 addons.go:231] Setting addon metrics-server=true in "addons-276457"
	I1025 21:12:00.402326   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.432480   19225 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1025 21:12:00.435031   19225 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1025 21:12:00.435047   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 21:12:00.434729   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.435267   19225 config.go:182] Loaded profile config "addons-276457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:12:00.435444   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.436995   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 21:12:00.434893   19225 addons.go:231] Setting addon default-storageclass=true in "addons-276457"
	I1025 21:12:00.437039   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.438406   19225 out.go:177]   - Using image docker.io/registry:2.8.3
	I1025 21:12:00.435184   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.434848   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.437855   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.440459   19225 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-276457"
	I1025 21:12:00.440934   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.441412   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.447422   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 21:12:00.446005   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 21:12:00.450233   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 21:12:00.448937   19225 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.3
	I1025 21:12:00.449052   19225 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1025 21:12:00.452934   19225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 21:12:00.452951   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 21:12:00.453007   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.454670   19225 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1025 21:12:00.454897   19225 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 21:12:00.454710   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 21:12:00.456011   19225 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1025 21:12:00.456026   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1025 21:12:00.457362   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.457420   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 21:12:00.459429   19225 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1025 21:12:00.459370   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 21:12:00.460788   19225 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 21:12:00.460804   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1025 21:12:00.459641   19225 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 21:12:00.460845   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1025 21:12:00.460851   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.460890   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.462212   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 21:12:00.463454   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 21:12:00.464688   19225 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 21:12:00.464704   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 21:12:00.464749   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.465693   19225 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-276457" context rescaled to 1 replicas
	I1025 21:12:00.465729   19225 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 21:12:00.467306   19225 out.go:177] * Verifying Kubernetes components...
	I1025 21:12:00.468597   19225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:12:00.475687   19225 out.go:177]   - Using image docker.io/busybox:stable
	I1025 21:12:00.476934   19225 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 21:12:00.478408   19225 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 21:12:00.478431   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 21:12:00.478487   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.483751   19225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:12:00.484991   19225 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 21:12:00.485012   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 21:12:00.485081   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.491119   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.493756   19225 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1025 21:12:00.496470   19225 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 21:12:00.496492   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 21:12:00.496548   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.497415   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.500339   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.504662   19225 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1025 21:12:00.506154   19225 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1025 21:12:00.506171   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1025 21:12:00.506218   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.507748   19225 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1025 21:12:00.508955   19225 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1025 21:12:00.508977   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1025 21:12:00.509026   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.517543   19225 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 21:12:00.517565   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 21:12:00.517613   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.524029   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.524112   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.527571   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.539375   19225 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.1
	I1025 21:12:00.540885   19225 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 21:12:00.540901   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 21:12:00.540958   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.547839   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.548187   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.549497   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.568871   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.571534   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.573984   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.579725   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.580290   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.627625   19225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1025 21:12:00.628541   19225 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 21:12:00.628561   19225 retry.go:31] will retry after 369.922544ms: ssh: handshake failed: EOF
	I1025 21:12:00.628667   19225 node_ready.go:35] waiting up to 6m0s for node "addons-276457" to be "Ready" ...
	I1025 21:12:00.746895   19225 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 21:12:00.746917   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 21:12:00.756838   19225 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 21:12:00.756865   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 21:12:00.831475   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 21:12:00.926790   19225 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 21:12:00.926859   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 21:12:00.927327   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 21:12:00.930171   19225 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1025 21:12:00.930195   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1025 21:12:00.932619   19225 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 21:12:00.932684   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 21:12:01.045335   19225 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 21:12:01.045418   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 21:12:01.048084   19225 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1025 21:12:01.048150   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1025 21:12:01.133694   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 21:12:01.138100   19225 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 21:12:01.138165   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 21:12:01.140808   19225 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1025 21:12:01.140866   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1025 21:12:01.144189   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 21:12:01.229479   19225 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 21:12:01.229556   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 21:12:01.238644   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 21:12:01.244145   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 21:12:01.329121   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 21:12:01.335466   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 21:12:01.340002   19225 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 21:12:01.340077   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 21:12:01.426904   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1025 21:12:01.427108   19225 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 21:12:01.427175   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 21:12:01.527964   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 21:12:01.641601   19225 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 21:12:01.641695   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 21:12:01.644193   19225 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1025 21:12:01.644260   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1025 21:12:01.838739   19225 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 21:12:01.838833   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 21:12:02.127809   19225 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1025 21:12:02.127891   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1025 21:12:02.228531   19225 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.600862843s)
	I1025 21:12:02.228571   19225 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1025 21:12:02.343596   19225 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 21:12:02.343636   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 21:12:02.446386   19225 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 21:12:02.446413   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 21:12:02.743473   19225 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1025 21:12:02.743551   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1025 21:12:02.831704   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:02.838093   19225 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 21:12:02.838159   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 21:12:02.934332   19225 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 21:12:02.934430   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 21:12:03.032851   19225 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1025 21:12:03.032929   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1025 21:12:03.127754   19225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 21:12:03.127833   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 21:12:03.144481   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 21:12:03.335639   19225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 21:12:03.335666   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 21:12:03.449186   19225 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 21:12:03.449212   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1025 21:12:03.728207   19225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 21:12:03.728238   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 21:12:03.744894   19225 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1025 21:12:03.744923   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1025 21:12:03.842250   19225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 21:12:03.842339   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 21:12:03.940453   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1025 21:12:03.947265   19225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 21:12:03.947293   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 21:12:04.227021   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 21:12:04.645251   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.813677657s)
	I1025 21:12:04.838886   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:05.033841   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.106481506s)
	I1025 21:12:05.826832   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.693038034s)
	I1025 21:12:06.552033   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.407809737s)
	I1025 21:12:06.552068   19225 addons.go:467] Verifying addon ingress=true in "addons-276457"
	I1025 21:12:06.552121   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.313388569s)
	I1025 21:12:06.552152   19225 addons.go:467] Verifying addon registry=true in "addons-276457"
	I1025 21:12:06.552185   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.307961425s)
	I1025 21:12:06.554496   19225 out.go:177] * Verifying registry addon...
	I1025 21:12:06.552208   19225 addons.go:467] Verifying addon metrics-server=true in "addons-276457"
	I1025 21:12:06.552293   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.223125195s)
	I1025 21:12:06.552342   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.216791649s)
	I1025 21:12:06.552398   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.125463299s)
	I1025 21:12:06.552445   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.024388063s)
	I1025 21:12:06.552544   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.4079754s)
	I1025 21:12:06.556995   19225 out.go:177] * Verifying ingress addon...
	W1025 21:12:06.557024   19225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 21:12:06.558415   19225 retry.go:31] will retry after 236.202102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 21:12:06.552619   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.612130669s)
	I1025 21:12:06.556451   19225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 21:12:06.559042   19225 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 21:12:06.562546   19225 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 21:12:06.562560   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:06.562757   19225 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 21:12:06.562773   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:06.629856   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:06.630115   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:06.795511   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 21:12:07.133505   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:07.133562   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:07.154902   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:07.332150   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.105069614s)
	I1025 21:12:07.332194   19225 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-276457"
	I1025 21:12:07.334199   19225 out.go:177] * Verifying csi-hostpath-driver addon...
	I1025 21:12:07.337237   19225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 21:12:07.338505   19225 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 21:12:07.338610   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:07.340573   19225 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 21:12:07.340593   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:07.343839   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:07.359189   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:07.462171   19225 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 21:12:07.536333   19225 addons.go:231] Setting addon gcp-auth=true in "addons-276457"
	I1025 21:12:07.536475   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:07.536998   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:07.558237   19225 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 21:12:07.558298   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:07.574134   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:07.635647   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:07.635743   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:07.849138   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:08.135503   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:08.136265   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:08.348558   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:08.643043   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:08.644063   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:08.848897   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:08.950776   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.155222699s)
	I1025 21:12:08.950849   19225 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.392581165s)
	I1025 21:12:08.953078   19225 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1025 21:12:08.954883   19225 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1025 21:12:08.956500   19225 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 21:12:08.956517   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 21:12:09.038689   19225 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 21:12:09.038756   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 21:12:09.126917   19225 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 21:12:09.126943   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1025 21:12:09.133985   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:09.134839   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:09.146321   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 21:12:09.156076   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:09.349034   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:09.636106   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:09.637148   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:09.849302   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:10.134581   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:10.134790   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:10.349113   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:10.635781   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:10.636129   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:10.734659   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.588298206s)
	I1025 21:12:10.735581   19225 addons.go:467] Verifying addon gcp-auth=true in "addons-276457"
	I1025 21:12:10.737415   19225 out.go:177] * Verifying gcp-auth addon...
	I1025 21:12:10.739887   19225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 21:12:10.742617   19225 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 21:12:10.742641   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:10.746634   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:10.848472   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:11.134214   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:11.134439   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:11.250014   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:11.349308   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:11.633104   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:11.633372   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:11.654357   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:11.750480   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:11.847991   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:12.134116   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:12.134718   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:12.250547   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:12.347886   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:12.634011   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:12.634022   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:12.750390   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:12.848027   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:13.134037   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:13.135120   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:13.249836   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:13.348338   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:13.633424   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:13.633569   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:13.654530   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:13.750062   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:13.848573   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:14.133495   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:14.133734   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:14.250067   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:14.347553   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:14.633526   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:14.633911   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:14.749905   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:14.848159   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:15.133262   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:15.133613   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:15.249823   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:15.348321   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:15.633403   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:15.633497   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:15.749779   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:15.847519   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:16.133833   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:16.134029   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:16.155002   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:16.249601   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:16.348006   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:16.633201   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:16.633259   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:16.749963   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:16.848115   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:17.133480   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:17.133721   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:17.249999   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:17.348522   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:17.633962   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:17.634577   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:17.749563   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:17.847841   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:18.133895   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:18.134120   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:18.249430   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:18.347867   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:18.633578   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:18.633682   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:18.654849   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:18.750350   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:18.847622   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:19.133914   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:19.134098   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:19.249857   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:19.348229   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:19.633255   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:19.633484   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:19.749661   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:19.847690   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:20.133187   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:20.133336   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:20.250493   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:20.347640   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:20.634037   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:20.634542   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:20.749285   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:20.847646   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:21.134352   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:21.134383   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:21.154674   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:21.250185   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:21.347526   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:21.633300   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:21.633436   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:21.750084   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:21.847267   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:22.133632   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:22.133720   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:22.249521   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:22.348503   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:22.633486   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:22.633565   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:22.750129   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:22.847255   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:23.133307   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:23.133853   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:23.250161   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:23.347758   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:23.633860   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:23.633911   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:23.655242   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:23.749703   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:23.848058   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:24.133318   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:24.133604   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:24.249936   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:24.348051   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:24.632996   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:24.633416   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:24.749833   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:24.848028   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:25.133303   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:25.133529   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:25.249937   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:25.348354   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:25.633186   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:25.633240   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:25.749663   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:25.848596   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:26.133782   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:26.134110   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:26.155001   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:26.249392   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:26.347872   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:26.633678   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:26.633841   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:26.750213   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:26.847257   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:27.133319   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:27.133423   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:27.249993   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:27.348705   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:27.633509   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:27.633724   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:27.750096   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:27.848561   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:28.133628   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:28.133897   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:28.250361   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:28.347652   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:28.633382   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:28.633588   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:28.654595   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:28.750115   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:28.847408   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:29.134037   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:29.134371   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:29.250029   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:29.348451   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:29.633813   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:29.634024   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:29.749236   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:29.850405   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:30.133779   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:30.133920   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:30.249656   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:30.348369   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:30.633305   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:30.633481   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:30.749818   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:30.848216   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:31.133198   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:31.133537   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:31.154250   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:31.249863   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:31.348328   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:31.633382   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:31.633517   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:31.750016   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:31.848287   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:32.133307   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:32.133500   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:32.250000   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:32.348391   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:32.633229   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:32.633409   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:32.750150   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:32.847378   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:33.136568   19225 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 21:12:33.136595   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:33.138842   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:33.155222   19225 node_ready.go:49] node "addons-276457" has status "Ready":"True"
	I1025 21:12:33.155260   19225 node_ready.go:38] duration metric: took 32.526557864s waiting for node "addons-276457" to be "Ready" ...
	I1025 21:12:33.155273   19225 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 21:12:33.165439   19225 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sf5h2" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:33.250601   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:33.351440   19225 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 21:12:33.351468   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:33.633866   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:33.634046   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:33.749793   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:33.849995   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:34.134141   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:34.134192   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:34.249567   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:34.348955   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:34.633634   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:34.633743   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:34.681949   19225 pod_ready.go:92] pod "coredns-5dd5756b68-sf5h2" in "kube-system" namespace has status "Ready":"True"
	I1025 21:12:34.681977   19225 pod_ready.go:81] duration metric: took 1.516502695s waiting for pod "coredns-5dd5756b68-sf5h2" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:34.681995   19225 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-276457" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:34.686497   19225 pod_ready.go:92] pod "etcd-addons-276457" in "kube-system" namespace has status "Ready":"True"
	I1025 21:12:34.686561   19225 pod_ready.go:81] duration metric: took 4.558807ms waiting for pod "etcd-addons-276457" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:34.686578   19225 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-276457" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:34.690861   19225 pod_ready.go:92] pod "kube-apiserver-addons-276457" in "kube-system" namespace has status "Ready":"True"
	I1025 21:12:34.690878   19225 pod_ready.go:81] duration metric: took 4.293041ms waiting for pod "kube-apiserver-addons-276457" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:34.690887   19225 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-276457" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:34.750234   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:34.755054   19225 pod_ready.go:92] pod "kube-controller-manager-addons-276457" in "kube-system" namespace has status "Ready":"True"
	I1025 21:12:34.755073   19225 pod_ready.go:81] duration metric: took 64.179742ms waiting for pod "kube-controller-manager-addons-276457" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:34.755084   19225 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lfxtf" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:34.849182   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:35.133762   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:35.133880   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:35.155334   19225 pod_ready.go:92] pod "kube-proxy-lfxtf" in "kube-system" namespace has status "Ready":"True"
	I1025 21:12:35.155355   19225 pod_ready.go:81] duration metric: took 400.266104ms waiting for pod "kube-proxy-lfxtf" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:35.155363   19225 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-276457" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:35.250173   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:35.348328   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:35.555607   19225 pod_ready.go:92] pod "kube-scheduler-addons-276457" in "kube-system" namespace has status "Ready":"True"
	I1025 21:12:35.555629   19225 pod_ready.go:81] duration metric: took 400.259869ms waiting for pod "kube-scheduler-addons-276457" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:35.555638   19225 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:35.633867   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:35.634021   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:35.749463   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:35.848519   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:36.136054   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:36.136147   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:36.251745   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:36.350024   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:36.640651   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:36.641581   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:36.750219   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:36.850325   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:37.136086   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:37.136405   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:37.249808   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:37.349930   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:37.634741   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:37.636183   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:37.750147   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:37.849140   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:37.935890   19225 pod_ready.go:102] pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace has status "Ready":"False"
	I1025 21:12:38.135034   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:38.135446   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:38.250182   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:38.349542   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:38.634565   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:38.635108   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:38.751230   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:38.850001   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:39.134954   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:39.135602   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:39.250112   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:39.349854   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:39.634350   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:39.634398   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:39.749750   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:39.849451   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:40.136552   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:40.136798   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:40.250063   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:40.349840   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:40.434911   19225 pod_ready.go:102] pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace has status "Ready":"False"
	I1025 21:12:40.633852   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:40.634010   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:40.750557   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:40.849042   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:41.134037   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:41.134109   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:41.249623   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:41.350272   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:41.633973   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:41.634654   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:41.750924   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:41.849171   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:42.134533   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:42.134676   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:42.250431   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:42.348429   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:42.634154   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:42.634920   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:42.750072   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:42.849672   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:42.936003   19225 pod_ready.go:102] pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace has status "Ready":"False"
	I1025 21:12:43.134915   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:43.135288   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:43.250865   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:43.350031   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:43.647860   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:43.648426   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:43.750902   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:43.850167   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:44.135522   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:44.135812   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:44.250556   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:44.349860   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:44.634416   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:44.634515   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:44.750361   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:44.848913   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:45.136647   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:45.136709   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:45.250866   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:45.349404   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:45.434200   19225 pod_ready.go:102] pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace has status "Ready":"False"
	I1025 21:12:45.633680   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:45.633741   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:45.750468   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:45.849268   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:46.133983   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:46.133992   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:46.249565   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:46.349080   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:46.634276   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:46.634298   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:46.749980   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:46.850355   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:47.134834   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:47.135031   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:47.250194   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:47.349470   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:47.436254   19225 pod_ready.go:102] pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace has status "Ready":"False"
	I1025 21:12:47.637208   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:47.637248   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:47.749839   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:47.849034   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:48.134521   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:48.134646   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:48.249987   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:48.349741   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:48.634974   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:48.635642   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:48.750484   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:48.848832   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:49.133858   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:49.134330   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:49.250267   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:49.350544   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:49.634274   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:49.634446   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:49.750191   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:49.849645   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:49.936272   19225 pod_ready.go:102] pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace has status "Ready":"False"
	I1025 21:12:50.134475   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:50.134601   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:50.249941   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:50.350002   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:50.634471   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:50.634696   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:50.750295   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:50.848376   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:50.934411   19225 pod_ready.go:92] pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace has status "Ready":"True"
	I1025 21:12:50.934431   19225 pod_ready.go:81] duration metric: took 15.378787487s waiting for pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:50.934440   19225 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6vcl4" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:50.938535   19225 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-6vcl4" in "kube-system" namespace has status "Ready":"True"
	I1025 21:12:50.938553   19225 pod_ready.go:81] duration metric: took 4.107301ms waiting for pod "nvidia-device-plugin-daemonset-6vcl4" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:50.938571   19225 pod_ready.go:38] duration metric: took 17.783282137s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 21:12:50.938590   19225 api_server.go:52] waiting for apiserver process to appear ...
	I1025 21:12:50.938641   19225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:12:50.951019   19225 api_server.go:72] duration metric: took 50.485254845s to wait for apiserver process to appear ...
	I1025 21:12:50.951054   19225 api_server.go:88] waiting for apiserver healthz status ...
	I1025 21:12:50.951076   19225 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 21:12:50.955022   19225 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 21:12:50.955983   19225 api_server.go:141] control plane version: v1.28.3
	I1025 21:12:50.956003   19225 api_server.go:131] duration metric: took 4.943529ms to wait for apiserver health ...
	I1025 21:12:50.956011   19225 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 21:12:50.963925   19225 system_pods.go:59] 19 kube-system pods found
	I1025 21:12:50.963956   19225 system_pods.go:61] "coredns-5dd5756b68-sf5h2" [751ca8b7-0f96-4283-985e-466a5465488b] Running
	I1025 21:12:50.963961   19225 system_pods.go:61] "csi-hostpath-attacher-0" [db70516b-fb4f-4675-809f-c13a75b3520b] Running
	I1025 21:12:50.963970   19225 system_pods.go:61] "csi-hostpath-resizer-0" [4209b996-75ca-4014-8e18-94ac7624feb4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 21:12:50.963979   19225 system_pods.go:61] "csi-hostpathplugin-lpvws" [dcd7bf3c-50b6-4316-af65-6502373843a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 21:12:50.963987   19225 system_pods.go:61] "etcd-addons-276457" [5b2496f0-24e4-4b6f-96a2-178f23708977] Running
	I1025 21:12:50.963992   19225 system_pods.go:61] "kindnet-gwvhf" [e43f73bf-ff00-4e2a-b7fd-04f1ea6e7525] Running
	I1025 21:12:50.964065   19225 system_pods.go:61] "kube-apiserver-addons-276457" [65afa9c8-ca8e-4c44-a32f-1e309066d3ba] Running
	I1025 21:12:50.964092   19225 system_pods.go:61] "kube-controller-manager-addons-276457" [072d240d-befd-44a9-a611-03a71d6b942d] Running
	I1025 21:12:50.964106   19225 system_pods.go:61] "kube-ingress-dns-minikube" [b61b20cf-d8fa-4d4d-bcba-1a241bd163c5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 21:12:50.964115   19225 system_pods.go:61] "kube-proxy-lfxtf" [5788d071-c70e-423a-b0b3-f6a073dd9ac7] Running
	I1025 21:12:50.964123   19225 system_pods.go:61] "kube-scheduler-addons-276457" [47f839ab-49d7-48d7-956f-aa6420977e23] Running
	I1025 21:12:50.964128   19225 system_pods.go:61] "metrics-server-7c66d45ddc-npx6l" [2269dbab-85e9-49c1-a14c-dc3b4c9b6219] Running
	I1025 21:12:50.964134   19225 system_pods.go:61] "nvidia-device-plugin-daemonset-6vcl4" [a592e92f-1bee-4d45-b641-bcd64d215d00] Running
	I1025 21:12:50.964140   19225 system_pods.go:61] "registry-proxy-757b5" [0f632bc0-5dac-4262-9ef7-eefd90d3e1e0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 21:12:50.964148   19225 system_pods.go:61] "registry-wzfbd" [2736623a-ce10-4cd0-9c1b-72b47c11791c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 21:12:50.964159   19225 system_pods.go:61] "snapshot-controller-58dbcc7b99-8w96w" [1a277b0b-61fc-4fd0-a8a8-9c0b6cf9a142] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:12:50.964168   19225 system_pods.go:61] "snapshot-controller-58dbcc7b99-z65gj" [28b16b26-5e12-461c-98a4-399698e38c7a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:12:50.964174   19225 system_pods.go:61] "storage-provisioner" [828646cd-20b3-4a1c-a61e-3d317b516b4a] Running
	I1025 21:12:50.964182   19225 system_pods.go:61] "tiller-deploy-7b677967b9-n7rpr" [136d2d8d-36a3-4072-9f39-dc7708f0c429] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1025 21:12:50.964191   19225 system_pods.go:74] duration metric: took 8.174159ms to wait for pod list to return data ...
	I1025 21:12:50.964200   19225 default_sa.go:34] waiting for default service account to be created ...
	I1025 21:12:50.966207   19225 default_sa.go:45] found service account: "default"
	I1025 21:12:50.966227   19225 default_sa.go:55] duration metric: took 2.020586ms for default service account to be created ...
	I1025 21:12:50.966235   19225 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 21:12:50.973622   19225 system_pods.go:86] 19 kube-system pods found
	I1025 21:12:50.973648   19225 system_pods.go:89] "coredns-5dd5756b68-sf5h2" [751ca8b7-0f96-4283-985e-466a5465488b] Running
	I1025 21:12:50.973654   19225 system_pods.go:89] "csi-hostpath-attacher-0" [db70516b-fb4f-4675-809f-c13a75b3520b] Running
	I1025 21:12:50.973662   19225 system_pods.go:89] "csi-hostpath-resizer-0" [4209b996-75ca-4014-8e18-94ac7624feb4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 21:12:50.973672   19225 system_pods.go:89] "csi-hostpathplugin-lpvws" [dcd7bf3c-50b6-4316-af65-6502373843a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 21:12:50.973677   19225 system_pods.go:89] "etcd-addons-276457" [5b2496f0-24e4-4b6f-96a2-178f23708977] Running
	I1025 21:12:50.973681   19225 system_pods.go:89] "kindnet-gwvhf" [e43f73bf-ff00-4e2a-b7fd-04f1ea6e7525] Running
	I1025 21:12:50.973686   19225 system_pods.go:89] "kube-apiserver-addons-276457" [65afa9c8-ca8e-4c44-a32f-1e309066d3ba] Running
	I1025 21:12:50.973691   19225 system_pods.go:89] "kube-controller-manager-addons-276457" [072d240d-befd-44a9-a611-03a71d6b942d] Running
	I1025 21:12:50.973697   19225 system_pods.go:89] "kube-ingress-dns-minikube" [b61b20cf-d8fa-4d4d-bcba-1a241bd163c5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 21:12:50.973704   19225 system_pods.go:89] "kube-proxy-lfxtf" [5788d071-c70e-423a-b0b3-f6a073dd9ac7] Running
	I1025 21:12:50.973709   19225 system_pods.go:89] "kube-scheduler-addons-276457" [47f839ab-49d7-48d7-956f-aa6420977e23] Running
	I1025 21:12:50.973713   19225 system_pods.go:89] "metrics-server-7c66d45ddc-npx6l" [2269dbab-85e9-49c1-a14c-dc3b4c9b6219] Running
	I1025 21:12:50.973718   19225 system_pods.go:89] "nvidia-device-plugin-daemonset-6vcl4" [a592e92f-1bee-4d45-b641-bcd64d215d00] Running
	I1025 21:12:50.973723   19225 system_pods.go:89] "registry-proxy-757b5" [0f632bc0-5dac-4262-9ef7-eefd90d3e1e0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 21:12:50.973735   19225 system_pods.go:89] "registry-wzfbd" [2736623a-ce10-4cd0-9c1b-72b47c11791c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 21:12:50.973742   19225 system_pods.go:89] "snapshot-controller-58dbcc7b99-8w96w" [1a277b0b-61fc-4fd0-a8a8-9c0b6cf9a142] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:12:50.973749   19225 system_pods.go:89] "snapshot-controller-58dbcc7b99-z65gj" [28b16b26-5e12-461c-98a4-399698e38c7a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:12:50.973756   19225 system_pods.go:89] "storage-provisioner" [828646cd-20b3-4a1c-a61e-3d317b516b4a] Running
	I1025 21:12:50.973762   19225 system_pods.go:89] "tiller-deploy-7b677967b9-n7rpr" [136d2d8d-36a3-4072-9f39-dc7708f0c429] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1025 21:12:50.973770   19225 system_pods.go:126] duration metric: took 7.531067ms to wait for k8s-apps to be running ...
	I1025 21:12:50.973779   19225 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 21:12:50.973821   19225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:12:50.984625   19225 system_svc.go:56] duration metric: took 10.835646ms WaitForService to wait for kubelet.
	I1025 21:12:50.984651   19225 kubeadm.go:581] duration metric: took 50.518893947s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 21:12:50.984678   19225 node_conditions.go:102] verifying NodePressure condition ...
	I1025 21:12:50.987538   19225 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 21:12:50.987573   19225 node_conditions.go:123] node cpu capacity is 8
	I1025 21:12:50.987587   19225 node_conditions.go:105] duration metric: took 2.90305ms to run NodePressure ...
	I1025 21:12:50.987601   19225 start.go:228] waiting for startup goroutines ...
	I1025 21:12:51.134348   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:51.134420   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:51.250578   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:51.350825   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:51.641527   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:51.643600   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:51.827184   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:51.850074   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:52.135372   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:52.135598   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:52.250611   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:52.350219   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:52.634793   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:52.634880   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:52.750682   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:52.849285   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:53.135535   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:53.135761   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:53.251151   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:53.350970   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:53.633772   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:53.634095   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:53.750044   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:53.849575   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:54.134227   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:54.134259   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:54.250697   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:54.349954   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:54.634116   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:54.634800   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:54.750731   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:54.849226   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:55.134382   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:55.134821   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:55.250702   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:55.349610   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:55.635099   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:55.635892   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:55.752105   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:55.849910   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:56.135193   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:56.136578   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:56.249515   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:56.348805   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:56.636163   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:56.636171   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:56.750372   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:56.858577   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:57.133846   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:57.133901   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:57.250624   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:57.349825   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:57.634054   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:57.634268   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:57.750356   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:57.849785   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:58.134210   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:58.134312   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:58.249762   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:58.349390   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:58.634053   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:58.634638   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:58.749932   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:58.849164   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:59.134264   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:59.134714   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:59.250509   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:59.349040   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:59.633877   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:59.634052   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:59.750263   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:59.848109   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:00.136330   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:00.136400   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:13:00.250205   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:00.349037   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:00.634240   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:00.634940   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:13:00.750658   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:00.849402   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:01.134929   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:13:01.135128   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:01.250796   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:01.349810   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:01.634932   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:01.635428   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:13:01.750764   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:01.848618   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:02.134252   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:13:02.134462   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:02.249786   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:02.349987   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:02.633929   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:13:02.634188   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:02.750947   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:02.849460   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:03.134994   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:03.136581   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:13:03.250332   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:03.348732   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:03.633548   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:03.633583   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:13:03.750165   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:03.848952   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:04.134429   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:04.134460   19225 kapi.go:107] duration metric: took 57.578005009s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 21:13:04.250425   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:04.348711   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:04.634488   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:04.749992   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:04.849381   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:05.135146   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:05.250850   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:05.350203   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:05.636054   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:05.753082   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:05.850702   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:06.135559   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:06.251082   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:06.349660   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:06.635199   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:06.750332   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:06.849749   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:07.134385   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:07.250749   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:07.349663   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:07.634142   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:07.750682   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:07.849129   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:08.134030   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:08.250212   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:08.349372   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:08.634080   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:08.750203   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:08.850240   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:09.134241   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:09.251748   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:09.349109   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:09.633582   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:09.749839   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:09.849610   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:10.133671   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:10.317615   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:10.356598   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:10.635467   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:10.750853   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:10.849738   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:11.134697   19225 kapi.go:107] duration metric: took 1m4.575650465s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 21:13:11.249992   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:11.349681   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:11.749744   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:11.853457   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:12.250084   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:12.349725   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:12.750151   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:12.849190   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:13.250791   19225 kapi.go:107] duration metric: took 1m2.510904458s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 21:13:13.253165   19225 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-276457 cluster.
	I1025 21:13:13.254819   19225 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 21:13:13.256443   19225 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1025 21:13:13.349691   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:13.848395   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:14.349466   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:14.848436   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:15.349192   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:15.862237   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:16.353858   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:16.850253   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:17.349194   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:17.849011   19225 kapi.go:107] duration metric: took 1m10.511772782s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 21:13:17.851069   19225 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner-rancher, metrics-server, helm-tiller, storage-provisioner, nvidia-device-plugin, inspektor-gadget, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1025 21:13:17.852604   19225 addons.go:502] enable addons completed in 1m17.456879709s: enabled=[cloud-spanner ingress-dns storage-provisioner-rancher metrics-server helm-tiller storage-provisioner nvidia-device-plugin inspektor-gadget default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1025 21:13:17.852652   19225 start.go:233] waiting for cluster config update ...
	I1025 21:13:17.852669   19225 start.go:242] writing updated cluster config ...
	I1025 21:13:17.852907   19225 ssh_runner.go:195] Run: rm -f paused
	I1025 21:13:17.899399   19225 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1025 21:13:17.901088   19225 out.go:177] * Done! kubectl is now configured to use "addons-276457" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 25 21:15:54 addons-276457 crio[947]: time="2023-10-25 21:15:54.256012860Z" level=info msg="Removing container: fc4d5bed77ab917b3ceadbb74308106640eee9431d50a76ee5c9987a704ca49d" id=d0b81c69-5089-4d6e-9788-ed04ff251774 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 21:15:54 addons-276457 crio[947]: time="2023-10-25 21:15:54.271980782Z" level=info msg="Removed container fc4d5bed77ab917b3ceadbb74308106640eee9431d50a76ee5c9987a704ca49d: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=d0b81c69-5089-4d6e-9788-ed04ff251774 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 21:15:54 addons-276457 crio[947]: time="2023-10-25 21:15:54.678812850Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6" id=7553f0a9-4b5b-4617-afb5-e92399b4afb3 name=/runtime.v1.ImageService/PullImage
	Oct 25 21:15:54 addons-276457 crio[947]: time="2023-10-25 21:15:54.679640306Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=a74e4a33-c3ef-4519-9cca-c553bdd87647 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 21:15:54 addons-276457 crio[947]: time="2023-10-25 21:15:54.680540443Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=a74e4a33-c3ef-4519-9cca-c553bdd87647 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 21:15:54 addons-276457 crio[947]: time="2023-10-25 21:15:54.681297250Z" level=info msg="Creating container: default/hello-world-app-5d77478584-ttctj/hello-world-app" id=579cfb02-addd-4a7a-b907-faada181bf75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 21:15:54 addons-276457 crio[947]: time="2023-10-25 21:15:54.681387728Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 25 21:15:54 addons-276457 crio[947]: time="2023-10-25 21:15:54.759477258Z" level=info msg="Created container dc1d4e788e12d4c64a63b01c6e611cb49ff9df0d98a841da1c79443d74cc921b: default/hello-world-app-5d77478584-ttctj/hello-world-app" id=579cfb02-addd-4a7a-b907-faada181bf75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 21:15:54 addons-276457 crio[947]: time="2023-10-25 21:15:54.760044049Z" level=info msg="Starting container: dc1d4e788e12d4c64a63b01c6e611cb49ff9df0d98a841da1c79443d74cc921b" id=d9678b48-2378-404f-a4c3-72169d05d04d name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 21:15:54 addons-276457 crio[947]: time="2023-10-25 21:15:54.768770510Z" level=info msg="Started container" PID=10429 containerID=dc1d4e788e12d4c64a63b01c6e611cb49ff9df0d98a841da1c79443d74cc921b description=default/hello-world-app-5d77478584-ttctj/hello-world-app id=d9678b48-2378-404f-a4c3-72169d05d04d name=/runtime.v1.RuntimeService/StartContainer sandboxID=d83bb02e4bc94e49d6a999d404198912712b3495bfaef92da8c9b315385f4753
	Oct 25 21:15:56 addons-276457 crio[947]: time="2023-10-25 21:15:56.090131235Z" level=info msg="Stopping container: f802f74ec158f2b426b16f988d765ca29a8a7778152f8d062b4fe080300ab823 (timeout: 2s)" id=953adfae-9b33-4c00-b7e4-c9005abf91cc name=/runtime.v1.RuntimeService/StopContainer
	Oct 25 21:15:58 addons-276457 crio[947]: time="2023-10-25 21:15:58.097655050Z" level=warning msg="Stopping container f802f74ec158f2b426b16f988d765ca29a8a7778152f8d062b4fe080300ab823 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=953adfae-9b33-4c00-b7e4-c9005abf91cc name=/runtime.v1.RuntimeService/StopContainer
	Oct 25 21:15:58 addons-276457 conmon[5899]: conmon f802f74ec158f2b426b1 <ninfo>: container 5911 exited with status 137
	Oct 25 21:15:58 addons-276457 crio[947]: time="2023-10-25 21:15:58.239874748Z" level=info msg="Stopped container f802f74ec158f2b426b16f988d765ca29a8a7778152f8d062b4fe080300ab823: ingress-nginx/ingress-nginx-controller-6f48fc54bd-gcj4s/controller" id=953adfae-9b33-4c00-b7e4-c9005abf91cc name=/runtime.v1.RuntimeService/StopContainer
	Oct 25 21:15:58 addons-276457 crio[947]: time="2023-10-25 21:15:58.240391311Z" level=info msg="Stopping pod sandbox: ea080af89b41b31aff69adfc33771b543090adeb881e8724ae2bfc5ee2a7c231" id=5df207fe-8b25-4723-aff2-110f7dc222a9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 21:15:58 addons-276457 crio[947]: time="2023-10-25 21:15:58.243459844Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-XJ566HNRML7ZKTL6 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-I2WWHX25PBCZEEPB - [0:0]\n-X KUBE-HP-I2WWHX25PBCZEEPB\n-X KUBE-HP-XJ566HNRML7ZKTL6\nCOMMIT\n"
	Oct 25 21:15:58 addons-276457 crio[947]: time="2023-10-25 21:15:58.244737800Z" level=info msg="Closing host port tcp:80"
	Oct 25 21:15:58 addons-276457 crio[947]: time="2023-10-25 21:15:58.244770374Z" level=info msg="Closing host port tcp:443"
	Oct 25 21:15:58 addons-276457 crio[947]: time="2023-10-25 21:15:58.246013136Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 25 21:15:58 addons-276457 crio[947]: time="2023-10-25 21:15:58.246030912Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 25 21:15:58 addons-276457 crio[947]: time="2023-10-25 21:15:58.246144764Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-6f48fc54bd-gcj4s Namespace:ingress-nginx ID:ea080af89b41b31aff69adfc33771b543090adeb881e8724ae2bfc5ee2a7c231 UID:98bd1c83-3689-4997-9715-385d3f107b22 NetNS:/var/run/netns/35f0deca-a0f1-44f6-856b-4077d6ffa665 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 25 21:15:58 addons-276457 crio[947]: time="2023-10-25 21:15:58.246248560Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-6f48fc54bd-gcj4s from CNI network \"kindnet\" (type=ptp)"
	Oct 25 21:15:58 addons-276457 crio[947]: time="2023-10-25 21:15:58.271447137Z" level=info msg="Stopped pod sandbox: ea080af89b41b31aff69adfc33771b543090adeb881e8724ae2bfc5ee2a7c231" id=5df207fe-8b25-4723-aff2-110f7dc222a9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 21:15:59 addons-276457 crio[947]: time="2023-10-25 21:15:59.269781868Z" level=info msg="Removing container: f802f74ec158f2b426b16f988d765ca29a8a7778152f8d062b4fe080300ab823" id=1037fdd8-14b6-432a-9531-076198eb6605 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 21:15:59 addons-276457 crio[947]: time="2023-10-25 21:15:59.284298921Z" level=info msg="Removed container f802f74ec158f2b426b16f988d765ca29a8a7778152f8d062b4fe080300ab823: ingress-nginx/ingress-nginx-controller-6f48fc54bd-gcj4s/controller" id=1037fdd8-14b6-432a-9531-076198eb6605 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dc1d4e788e12d       gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6                      8 seconds ago       Running             hello-world-app           0                   d83bb02e4bc94       hello-world-app-5d77478584-ttctj
	8876bc82bf0e0       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                              2 minutes ago       Running             nginx                     0                   1c2e0300f65af       nginx
	7cba49f7404b6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   1c9817f50ebdf       gcp-auth-d4c87556c-5hmwp
	f20e415c99ea2       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago       Exited              patch                     2                   7f83cd33faf88       ingress-nginx-admission-patch-c27zh
	2452485fa2fc8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   7d1fe410a46b0       ingress-nginx-admission-create-dp2tl
	75ff7ad1772cd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   096b2a366d270       coredns-5dd5756b68-sf5h2
	992887bfba631       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   d90eb0b2bfef0       storage-provisioner
	ce84af5567968       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                                             4 minutes ago       Running             kube-proxy                0                   087f8ba5f918a       kube-proxy-lfxtf
	5f7b376083c55       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   273acfe28a6d2       kindnet-gwvhf
	03b775e92e4fb       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                                             4 minutes ago       Running             kube-scheduler            0                   47de2bfcdfc83       kube-scheduler-addons-276457
	7b23088978803       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   7a0aa173b6609       etcd-addons-276457
	7bd55dfbf9f63       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                                             4 minutes ago       Running             kube-apiserver            0                   424fae2d2fca5       kube-apiserver-addons-276457
	375b113702a15       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                                             4 minutes ago       Running             kube-controller-manager   0                   fe8d7772b58ac       kube-controller-manager-addons-276457
	
	* 
	* ==> coredns [75ff7ad1772cd4cc641c27d81dc0ba3f5ae883af114601ed60edd0fe5e91f539] <==
	* [INFO] 10.244.0.16:35861 - 50476 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000110623s
	[INFO] 10.244.0.16:41819 - 14105 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005211734s
	[INFO] 10.244.0.16:41819 - 9734 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005304659s
	[INFO] 10.244.0.16:43957 - 59882 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003654702s
	[INFO] 10.244.0.16:43957 - 44015 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004749289s
	[INFO] 10.244.0.16:44540 - 61998 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005153616s
	[INFO] 10.244.0.16:44540 - 63019 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005239373s
	[INFO] 10.244.0.16:41996 - 36278 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000060207s
	[INFO] 10.244.0.16:41996 - 24754 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000098313s
	[INFO] 10.244.0.20:39156 - 35120 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000171297s
	[INFO] 10.244.0.20:40390 - 17221 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000150397s
	[INFO] 10.244.0.20:40135 - 52991 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144489s
	[INFO] 10.244.0.20:52295 - 8782 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155773s
	[INFO] 10.244.0.20:56044 - 57441 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096856s
	[INFO] 10.244.0.20:49017 - 23362 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117587s
	[INFO] 10.244.0.20:44022 - 8593 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006916256s
	[INFO] 10.244.0.20:48756 - 35047 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007296825s
	[INFO] 10.244.0.20:45314 - 57066 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006592822s
	[INFO] 10.244.0.20:45101 - 27190 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007298437s
	[INFO] 10.244.0.20:49742 - 1250 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00432022s
	[INFO] 10.244.0.20:56914 - 26333 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004911244s
	[INFO] 10.244.0.20:52891 - 56091 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000660891s
	[INFO] 10.244.0.20:56358 - 7017 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000656753s
	[INFO] 10.244.0.22:59087 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000109206s
	[INFO] 10.244.0.22:48971 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000070826s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-276457
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-276457
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=addons-276457
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T21_11_48_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-276457
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Oct 2023 21:11:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-276457
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 25 Oct 2023 21:16:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Oct 2023 21:14:51 +0000   Wed, 25 Oct 2023 21:11:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Oct 2023 21:14:51 +0000   Wed, 25 Oct 2023 21:11:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Oct 2023 21:14:51 +0000   Wed, 25 Oct 2023 21:11:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Oct 2023 21:14:51 +0000   Wed, 25 Oct 2023 21:12:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-276457
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c2fc0dc393648b7a68521daff511eb3
	  System UUID:                0971f13f-3e61-4c7b-bfb8-1801c7f8cab3
	  Boot ID:                    34092eb3-c5c2-47c9-ae8c-38e7a764813a
	  Kernel Version:             5.15.0-1045-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-ttctj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-d4c87556c-5hmwp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 coredns-5dd5756b68-sf5h2                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m3s
	  kube-system                 etcd-addons-276457                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m16s
	  kube-system                 kindnet-gwvhf                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m3s
	  kube-system                 kube-apiserver-addons-276457             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-addons-276457    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-lfxtf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-addons-276457             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m58s  kube-proxy       
	  Normal  Starting                 4m16s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m16s  kubelet          Node addons-276457 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m16s  kubelet          Node addons-276457 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m16s  kubelet          Node addons-276457 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m4s   node-controller  Node addons-276457 event: Registered Node addons-276457 in Controller
	  Normal  NodeReady                3m30s  kubelet          Node addons-276457 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.007686] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003094] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000681] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000644] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000681] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000619] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000652] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.894003] kauditd_printk_skb: 36 callbacks suppressed
	[Oct25 21:13] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a2 89 e0 fc dd 0b 4e 55 b8 5a de 54 08 00
	[  +1.015767] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 89 e0 fc dd 0b 4e 55 b8 5a de 54 08 00
	[  +2.015766] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: a2 89 e0 fc dd 0b 4e 55 b8 5a de 54 08 00
	[  +4.031569] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 89 e0 fc dd 0b 4e 55 b8 5a de 54 08 00
	[  +8.191221] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: a2 89 e0 fc dd 0b 4e 55 b8 5a de 54 08 00
	[Oct25 21:14] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a2 89 e0 fc dd 0b 4e 55 b8 5a de 54 08 00
	[ +33.532590] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: a2 89 e0 fc dd 0b 4e 55 b8 5a de 54 08 00
	
	* 
	* ==> etcd [7b230889788035517ae74d3b3c3ee09dd49a3de32493db3e17276fbbc8f68a57] <==
	* {"level":"warn","ts":"2023-10-25T21:12:03.832116Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.897377ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-276457\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2023-10-25T21:12:03.832152Z","caller":"traceutil/trace.go:171","msg":"trace[1538852331] range","detail":"{range_begin:/registry/minions/addons-276457; range_end:; response_count:1; response_revision:388; }","duration":"103.944379ms","start":"2023-10-25T21:12:03.728199Z","end":"2023-10-25T21:12:03.832143Z","steps":["trace[1538852331] 'agreement among raft nodes before linearized reading'  (duration: 103.848859ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:12:03.83232Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.158376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-276457\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2023-10-25T21:12:03.832352Z","caller":"traceutil/trace.go:171","msg":"trace[1603167304] range","detail":"{range_begin:/registry/minions/addons-276457; range_end:; response_count:1; response_revision:388; }","duration":"104.193129ms","start":"2023-10-25T21:12:03.728152Z","end":"2023-10-25T21:12:03.832345Z","steps":["trace[1603167304] 'agreement among raft nodes before linearized reading'  (duration: 104.139351ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:12:03.832504Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.623935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-25T21:12:03.832537Z","caller":"traceutil/trace.go:171","msg":"trace[1069362955] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:388; }","duration":"299.65618ms","start":"2023-10-25T21:12:03.532871Z","end":"2023-10-25T21:12:03.832528Z","steps":["trace[1069362955] 'agreement among raft nodes before linearized reading'  (duration: 299.609163ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:12:03.832659Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.909526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2023-10-25T21:12:03.832688Z","caller":"traceutil/trace.go:171","msg":"trace[779930748] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:388; }","duration":"299.937779ms","start":"2023-10-25T21:12:03.532743Z","end":"2023-10-25T21:12:03.832681Z","steps":["trace[779930748] 'agreement among raft nodes before linearized reading'  (duration: 299.891423ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:12:03.832808Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.130005ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-25T21:12:03.832838Z","caller":"traceutil/trace.go:171","msg":"trace[1487520939] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:388; }","duration":"300.169876ms","start":"2023-10-25T21:12:03.532662Z","end":"2023-10-25T21:12:03.832832Z","steps":["trace[1487520939] 'agreement among raft nodes before linearized reading'  (duration: 300.116771ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:12:03.832897Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-25T21:12:03.532656Z","time spent":"300.232352ms","remote":"127.0.0.1:49518","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" "}
	{"level":"info","ts":"2023-10-25T21:12:04.038422Z","caller":"traceutil/trace.go:171","msg":"trace[2033627025] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"103.251871ms","start":"2023-10-25T21:12:03.935148Z","end":"2023-10-25T21:12:04.0384Z","steps":["trace[2033627025] 'process raft request'  (duration: 103.010221ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:12:04.132223Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.60448ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"warn","ts":"2023-10-25T21:12:04.137133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.095411ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/tiller-deploy\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-25T21:12:04.137205Z","caller":"traceutil/trace.go:171","msg":"trace[1774568463] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:0; response_revision:391; }","duration":"199.176653ms","start":"2023-10-25T21:12:03.938013Z","end":"2023-10-25T21:12:04.13719Z","steps":["trace[1774568463] 'agreement among raft nodes before linearized reading'  (duration: 100.832156ms)","trace[1774568463] 'range keys from in-memory index tree'  (duration: 98.244259ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-25T21:12:04.137589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.936032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-10-25T21:12:04.137642Z","caller":"traceutil/trace.go:171","msg":"trace[1635521950] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:391; }","duration":"199.993119ms","start":"2023-10-25T21:12:03.937639Z","end":"2023-10-25T21:12:04.137632Z","steps":["trace[1635521950] 'agreement among raft nodes before linearized reading'  (duration: 199.895616ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-25T21:12:04.13244Z","caller":"traceutil/trace.go:171","msg":"trace[1563126748] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:391; }","duration":"197.818422ms","start":"2023-10-25T21:12:03.934578Z","end":"2023-10-25T21:12:04.132396Z","steps":["trace[1563126748] 'agreement among raft nodes before linearized reading'  (duration: 104.151294ms)","trace[1563126748] 'range keys from in-memory index tree'  (duration: 93.368661ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-25T21:12:04.827927Z","caller":"traceutil/trace.go:171","msg":"trace[423696568] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"100.819564ms","start":"2023-10-25T21:12:04.727086Z","end":"2023-10-25T21:12:04.827906Z","steps":["trace[423696568] 'process raft request'  (duration: 100.73397ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-25T21:12:04.827999Z","caller":"traceutil/trace.go:171","msg":"trace[1786168989] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"101.021154ms","start":"2023-10-25T21:12:04.726957Z","end":"2023-10-25T21:12:04.827978Z","steps":["trace[1786168989] 'process raft request'  (duration: 99.820098ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-25T21:12:04.828161Z","caller":"traceutil/trace.go:171","msg":"trace[969602590] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"101.041772ms","start":"2023-10-25T21:12:04.727107Z","end":"2023-10-25T21:12:04.828149Z","steps":["trace[969602590] 'process raft request'  (duration: 100.740708ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-25T21:12:04.828206Z","caller":"traceutil/trace.go:171","msg":"trace[2091919188] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"101.168732ms","start":"2023-10-25T21:12:04.727027Z","end":"2023-10-25T21:12:04.828196Z","steps":["trace[2091919188] 'process raft request'  (duration: 100.753583ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:12:04.828339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.188233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-25T21:12:04.828378Z","caller":"traceutil/trace.go:171","msg":"trace[841340284] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:424; }","duration":"101.239217ms","start":"2023-10-25T21:12:04.72713Z","end":"2023-10-25T21:12:04.828369Z","steps":["trace[841340284] 'agreement among raft nodes before linearized reading'  (duration: 101.143963ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-25T21:13:15.982032Z","caller":"traceutil/trace.go:171","msg":"trace[494949069] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"122.196482ms","start":"2023-10-25T21:13:15.859815Z","end":"2023-10-25T21:13:15.982011Z","steps":["trace[494949069] 'process raft request'  (duration: 59.54658ms)","trace[494949069] 'compare'  (duration: 62.557567ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [7cba49f7404b6f22d6585bd4a1b8628a39648fc0075af251e5a53bbd5c197034] <==
	* 2023/10/25 21:13:12 GCP Auth Webhook started!
	2023/10/25 21:13:23 Ready to marshal response ...
	2023/10/25 21:13:23 Ready to write response ...
	2023/10/25 21:13:28 Ready to marshal response ...
	2023/10/25 21:13:28 Ready to write response ...
	2023/10/25 21:13:29 Ready to marshal response ...
	2023/10/25 21:13:29 Ready to write response ...
	2023/10/25 21:13:29 Ready to marshal response ...
	2023/10/25 21:13:29 Ready to write response ...
	2023/10/25 21:13:31 Ready to marshal response ...
	2023/10/25 21:13:31 Ready to write response ...
	2023/10/25 21:13:36 Ready to marshal response ...
	2023/10/25 21:13:36 Ready to write response ...
	2023/10/25 21:14:20 Ready to marshal response ...
	2023/10/25 21:14:20 Ready to write response ...
	2023/10/25 21:14:43 Ready to marshal response ...
	2023/10/25 21:14:43 Ready to write response ...
	2023/10/25 21:15:53 Ready to marshal response ...
	2023/10/25 21:15:53 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:16:03 up 58 min,  0 users,  load average: 0.24, 0.64, 0.34
	Linux addons-276457 5.15.0-1045-gcp #53~20.04.2-Ubuntu SMP Wed Oct 18 12:59:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [5f7b376083c552e6a3037924a190823a01c9f75128f40fda99fe66afc09b5fd5] <==
	* I1025 21:14:02.779314       1 main.go:227] handling current node
	I1025 21:14:12.782183       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:14:12.782204       1 main.go:227] handling current node
	I1025 21:14:22.793988       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:14:22.794014       1 main.go:227] handling current node
	I1025 21:14:32.797015       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:14:32.797037       1 main.go:227] handling current node
	I1025 21:14:42.809176       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:14:42.809197       1 main.go:227] handling current node
	I1025 21:14:52.817090       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:14:52.817110       1 main.go:227] handling current node
	I1025 21:15:02.829145       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:15:02.829165       1 main.go:227] handling current node
	I1025 21:15:12.837201       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:15:12.837223       1 main.go:227] handling current node
	I1025 21:15:22.846377       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:15:22.846398       1 main.go:227] handling current node
	I1025 21:15:32.849796       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:15:32.849818       1 main.go:227] handling current node
	I1025 21:15:42.853365       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:15:42.853390       1 main.go:227] handling current node
	I1025 21:15:52.865496       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:15:52.865519       1 main.go:227] handling current node
	I1025 21:16:02.877719       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:16:02.877753       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [7bd55dfbf9f638594bd4e3dc7a593f548a86bb9472bda78a0e2308cc6278c607] <==
	* I1025 21:13:31.900764       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.38.118"}
	I1025 21:13:44.348584       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1025 21:13:51.731109       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1025 21:13:52.935043       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1025 21:14:30.323146       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1025 21:14:58.664123       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:14:58.664181       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:14:58.670136       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:14:58.670192       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:14:58.678588       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:14:58.678631       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:14:58.678676       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:14:58.678711       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:14:58.687101       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:14:58.687229       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:14:58.691382       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:14:58.691490       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:14:58.699584       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:14:58.699633       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:14:58.700172       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:14:58.700197       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1025 21:14:59.679360       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1025 21:14:59.700421       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1025 21:14:59.737047       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1025 21:15:53.251145       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.48.108"}
	
	* 
	* ==> kube-controller-manager [375b113702a156fe1ccf54013e40938a8a1c8cc66b19265a04d47d2f0372677a] <==
	* W1025 21:15:17.751157       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:15:17.751186       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1025 21:15:18.083682       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:15:18.083716       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1025 21:15:19.802772       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:15:19.802798       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1025 21:15:29.706115       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:15:29.706142       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1025 21:15:36.505262       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:15:36.505294       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1025 21:15:40.842225       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:15:40.842255       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1025 21:15:43.181414       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:15:43.181450       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1025 21:15:53.085275       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1025 21:15:53.095449       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-ttctj"
	I1025 21:15:53.099693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="14.54329ms"
	I1025 21:15:53.103560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="3.824026ms"
	I1025 21:15:53.103634       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="39.095µs"
	I1025 21:15:53.112556       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="86.945µs"
	I1025 21:15:55.080745       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1025 21:15:55.081528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6f48fc54bd" duration="7.958µs"
	I1025 21:15:55.084593       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1025 21:15:55.271699       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.727876ms"
	I1025 21:15:55.271843       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="43.428µs"
	
	* 
	* ==> kube-proxy [ce84af55679689b692ba4b8d7bb3dec0838ce24c452f4e2601e331cf53a83570] <==
	* I1025 21:12:03.628858       1 server_others.go:69] "Using iptables proxy"
	I1025 21:12:03.930680       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1025 21:12:04.332294       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 21:12:04.340788       1 server_others.go:152] "Using iptables Proxier"
	I1025 21:12:04.340902       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 21:12:04.340941       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 21:12:04.341003       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 21:12:04.341264       1 server.go:846] "Version info" version="v1.28.3"
	I1025 21:12:04.341575       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 21:12:04.342520       1 config.go:188] "Starting service config controller"
	I1025 21:12:04.441906       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 21:12:04.343041       1 config.go:97] "Starting endpoint slice config controller"
	I1025 21:12:04.343507       1 config.go:315] "Starting node config controller"
	I1025 21:12:04.442042       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 21:12:04.442059       1 shared_informer.go:318] Caches are synced for node config
	I1025 21:12:04.442066       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 21:12:04.442071       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 21:12:04.442076       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [03b775e92e4fbfcba74879e770c73eb07f1537c7937814e57a39084de34c1676] <==
	* E1025 21:11:44.542675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1025 21:11:44.542525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1025 21:11:44.542454       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 21:11:44.542795       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1025 21:11:44.542466       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 21:11:44.542866       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 21:11:44.542566       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 21:11:44.542989       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 21:11:44.542603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 21:11:44.543037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1025 21:11:44.543132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 21:11:44.543180       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 21:11:45.352724       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 21:11:45.352776       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1025 21:11:45.432530       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 21:11:45.432566       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 21:11:45.438905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 21:11:45.438953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 21:11:45.465298       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 21:11:45.465330       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1025 21:11:45.484432       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 21:11:45.484465       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1025 21:11:45.531879       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 21:11:45.531910       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1025 21:11:48.036797       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 25 21:15:53 addons-276457 kubelet[1562]: I1025 21:15:53.102445    1562 memory_manager.go:346] "RemoveStaleState removing state" podUID="dcd7bf3c-50b6-4316-af65-6502373843a9" containerName="node-driver-registrar"
	Oct 25 21:15:53 addons-276457 kubelet[1562]: I1025 21:15:53.102455    1562 memory_manager.go:346] "RemoveStaleState removing state" podUID="d12a91fa-38fe-4860-9cc7-d501f764a771" containerName="task-pv-container"
	Oct 25 21:15:53 addons-276457 kubelet[1562]: I1025 21:15:53.102464    1562 memory_manager.go:346] "RemoveStaleState removing state" podUID="4209b996-75ca-4014-8e18-94ac7624feb4" containerName="csi-resizer"
	Oct 25 21:15:53 addons-276457 kubelet[1562]: I1025 21:15:53.224277    1562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d8203b60-4645-42e4-89c9-bf7db0c5e4aa-gcp-creds\") pod \"hello-world-app-5d77478584-ttctj\" (UID: \"d8203b60-4645-42e4-89c9-bf7db0c5e4aa\") " pod="default/hello-world-app-5d77478584-ttctj"
	Oct 25 21:15:53 addons-276457 kubelet[1562]: I1025 21:15:53.224339    1562 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6x5b\" (UniqueName: \"kubernetes.io/projected/d8203b60-4645-42e4-89c9-bf7db0c5e4aa-kube-api-access-f6x5b\") pod \"hello-world-app-5d77478584-ttctj\" (UID: \"d8203b60-4645-42e4-89c9-bf7db0c5e4aa\") " pod="default/hello-world-app-5d77478584-ttctj"
	Oct 25 21:15:53 addons-276457 kubelet[1562]: W1025 21:15:53.483021    1562 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8f1787a5dda684269d563bfd3f34339fa1c9e073fbbaba1ac14766eb9d10c359/crio-d83bb02e4bc94e49d6a999d404198912712b3495bfaef92da8c9b315385f4753 WatchSource:0}: Error finding container d83bb02e4bc94e49d6a999d404198912712b3495bfaef92da8c9b315385f4753: Status 404 returned error can't find the container with id d83bb02e4bc94e49d6a999d404198912712b3495bfaef92da8c9b315385f4753
	Oct 25 21:15:54 addons-276457 kubelet[1562]: I1025 21:15:54.255032    1562 scope.go:117] "RemoveContainer" containerID="fc4d5bed77ab917b3ceadbb74308106640eee9431d50a76ee5c9987a704ca49d"
	Oct 25 21:15:54 addons-276457 kubelet[1562]: I1025 21:15:54.272223    1562 scope.go:117] "RemoveContainer" containerID="fc4d5bed77ab917b3ceadbb74308106640eee9431d50a76ee5c9987a704ca49d"
	Oct 25 21:15:54 addons-276457 kubelet[1562]: E1025 21:15:54.272668    1562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc4d5bed77ab917b3ceadbb74308106640eee9431d50a76ee5c9987a704ca49d\": container with ID starting with fc4d5bed77ab917b3ceadbb74308106640eee9431d50a76ee5c9987a704ca49d not found: ID does not exist" containerID="fc4d5bed77ab917b3ceadbb74308106640eee9431d50a76ee5c9987a704ca49d"
	Oct 25 21:15:54 addons-276457 kubelet[1562]: I1025 21:15:54.272716    1562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4d5bed77ab917b3ceadbb74308106640eee9431d50a76ee5c9987a704ca49d"} err="failed to get container status \"fc4d5bed77ab917b3ceadbb74308106640eee9431d50a76ee5c9987a704ca49d\": rpc error: code = NotFound desc = could not find container \"fc4d5bed77ab917b3ceadbb74308106640eee9431d50a76ee5c9987a704ca49d\": container with ID starting with fc4d5bed77ab917b3ceadbb74308106640eee9431d50a76ee5c9987a704ca49d not found: ID does not exist"
	Oct 25 21:15:54 addons-276457 kubelet[1562]: I1025 21:15:54.334082    1562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwhd5\" (UniqueName: \"kubernetes.io/projected/b61b20cf-d8fa-4d4d-bcba-1a241bd163c5-kube-api-access-lwhd5\") pod \"b61b20cf-d8fa-4d4d-bcba-1a241bd163c5\" (UID: \"b61b20cf-d8fa-4d4d-bcba-1a241bd163c5\") "
	Oct 25 21:15:54 addons-276457 kubelet[1562]: I1025 21:15:54.335888    1562 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b61b20cf-d8fa-4d4d-bcba-1a241bd163c5-kube-api-access-lwhd5" (OuterVolumeSpecName: "kube-api-access-lwhd5") pod "b61b20cf-d8fa-4d4d-bcba-1a241bd163c5" (UID: "b61b20cf-d8fa-4d4d-bcba-1a241bd163c5"). InnerVolumeSpecName "kube-api-access-lwhd5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 25 21:15:54 addons-276457 kubelet[1562]: I1025 21:15:54.434465    1562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lwhd5\" (UniqueName: \"kubernetes.io/projected/b61b20cf-d8fa-4d4d-bcba-1a241bd163c5-kube-api-access-lwhd5\") on node \"addons-276457\" DevicePath \"\""
	Oct 25 21:15:55 addons-276457 kubelet[1562]: I1025 21:15:55.265892    1562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-ttctj" podStartSLOduration=1.072454 podCreationTimestamp="2023-10-25 21:15:53 +0000 UTC" firstStartedPulling="2023-10-25 21:15:53.485753604 +0000 UTC m=+246.265643110" lastFinishedPulling="2023-10-25 21:15:54.679149038 +0000 UTC m=+247.459038542" observedRunningTime="2023-10-25 21:15:55.265771269 +0000 UTC m=+248.045660780" watchObservedRunningTime="2023-10-25 21:15:55.265849432 +0000 UTC m=+248.045738943"
	Oct 25 21:15:55 addons-276457 kubelet[1562]: I1025 21:15:55.345925    1562 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6e46e1d6-19b9-4493-8dc5-a46173e824e7" path="/var/lib/kubelet/pods/6e46e1d6-19b9-4493-8dc5-a46173e824e7/volumes"
	Oct 25 21:15:55 addons-276457 kubelet[1562]: I1025 21:15:55.346258    1562 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a455a210-b8f9-4e5f-ba6b-a9e4a3af550d" path="/var/lib/kubelet/pods/a455a210-b8f9-4e5f-ba6b-a9e4a3af550d/volumes"
	Oct 25 21:15:55 addons-276457 kubelet[1562]: I1025 21:15:55.346558    1562 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b61b20cf-d8fa-4d4d-bcba-1a241bd163c5" path="/var/lib/kubelet/pods/b61b20cf-d8fa-4d4d-bcba-1a241bd163c5/volumes"
	Oct 25 21:15:58 addons-276457 kubelet[1562]: I1025 21:15:58.356204    1562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98bd1c83-3689-4997-9715-385d3f107b22-webhook-cert\") pod \"98bd1c83-3689-4997-9715-385d3f107b22\" (UID: \"98bd1c83-3689-4997-9715-385d3f107b22\") "
	Oct 25 21:15:58 addons-276457 kubelet[1562]: I1025 21:15:58.356279    1562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf62c\" (UniqueName: \"kubernetes.io/projected/98bd1c83-3689-4997-9715-385d3f107b22-kube-api-access-gf62c\") pod \"98bd1c83-3689-4997-9715-385d3f107b22\" (UID: \"98bd1c83-3689-4997-9715-385d3f107b22\") "
	Oct 25 21:15:58 addons-276457 kubelet[1562]: I1025 21:15:58.358028    1562 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98bd1c83-3689-4997-9715-385d3f107b22-kube-api-access-gf62c" (OuterVolumeSpecName: "kube-api-access-gf62c") pod "98bd1c83-3689-4997-9715-385d3f107b22" (UID: "98bd1c83-3689-4997-9715-385d3f107b22"). InnerVolumeSpecName "kube-api-access-gf62c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 25 21:15:58 addons-276457 kubelet[1562]: I1025 21:15:58.358076    1562 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bd1c83-3689-4997-9715-385d3f107b22-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "98bd1c83-3689-4997-9715-385d3f107b22" (UID: "98bd1c83-3689-4997-9715-385d3f107b22"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 25 21:15:58 addons-276457 kubelet[1562]: I1025 21:15:58.457509    1562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gf62c\" (UniqueName: \"kubernetes.io/projected/98bd1c83-3689-4997-9715-385d3f107b22-kube-api-access-gf62c\") on node \"addons-276457\" DevicePath \"\""
	Oct 25 21:15:58 addons-276457 kubelet[1562]: I1025 21:15:58.457563    1562 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98bd1c83-3689-4997-9715-385d3f107b22-webhook-cert\") on node \"addons-276457\" DevicePath \"\""
	Oct 25 21:15:59 addons-276457 kubelet[1562]: I1025 21:15:59.268878    1562 scope.go:117] "RemoveContainer" containerID="f802f74ec158f2b426b16f988d765ca29a8a7778152f8d062b4fe080300ab823"
	Oct 25 21:15:59 addons-276457 kubelet[1562]: I1025 21:15:59.346656    1562 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="98bd1c83-3689-4997-9715-385d3f107b22" path="/var/lib/kubelet/pods/98bd1c83-3689-4997-9715-385d3f107b22/volumes"
	
	* 
	* ==> storage-provisioner [992887bfba63158b15b41b7b4c6c040773c88e0814e92b79b9186090a6a838b6] <==
	* I1025 21:12:34.111323       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 21:12:34.128290       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 21:12:34.128342       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 21:12:34.136801       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 21:12:34.136960       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-276457_050db566-dd91-4398-8055-640c4bf9f606!
	I1025 21:12:34.137967       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a304e69-571b-4a80-abf0-c4402a8dbfb2", APIVersion:"v1", ResourceVersion:"864", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-276457_050db566-dd91-4398-8055-640c4bf9f606 became leader
	I1025 21:12:34.237193       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-276457_050db566-dd91-4398-8055-640c4bf9f606!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-276457 -n addons-276457
helpers_test.go:261: (dbg) Run:  kubectl --context addons-276457 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (157.67s)

x
+
TestAddons/parallel/Headlamp (2.75s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-276457 --alsologtostderr -v=1
addons_test.go:823: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-276457 --alsologtostderr -v=1: exit status 11 (264.637851ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 21:13:42.800783   30168 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:13:42.801068   30168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:13:42.801081   30168 out.go:309] Setting ErrFile to fd 2...
	I1025 21:13:42.801090   30168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:13:42.801298   30168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
	I1025 21:13:42.801618   30168 mustload.go:65] Loading cluster: addons-276457
	I1025 21:13:42.802018   30168 config.go:182] Loaded profile config "addons-276457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:13:42.802043   30168 addons.go:594] checking whether the cluster is paused
	I1025 21:13:42.802152   30168 config.go:182] Loaded profile config "addons-276457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:13:42.802169   30168 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:13:42.802638   30168 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:13:42.819156   30168 ssh_runner.go:195] Run: systemctl --version
	I1025 21:13:42.819217   30168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:13:42.834785   30168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:13:42.922083   30168 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 21:13:42.922151   30168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 21:13:42.956461   30168 cri.go:89] found id: "5c07f2bb09f2c711f9b419517eb6f930ca12a30b2d465c5ea2a6cfbe58d30646"
	I1025 21:13:42.956481   30168 cri.go:89] found id: "8aa314d9326c7f25b4ae2d6b734884de3fbaaa71b15f3fdeb804e89f30ea1ad9"
	I1025 21:13:42.956485   30168 cri.go:89] found id: "5c5a5cd75482bb73793c73dc377b16a6e4c22182cfc1c6bd5c8ef8e0156010ff"
	I1025 21:13:42.956489   30168 cri.go:89] found id: "c85826666b363181e9ae22047f3d4a64e216ae7ca8f8562b08624b60fa9508f1"
	I1025 21:13:42.956493   30168 cri.go:89] found id: "95e9254a51e5ad6619a6bd647a99ba4866231cdcc1b3546aaa06857328ac7841"
	I1025 21:13:42.956497   30168 cri.go:89] found id: "d61a09942e5d9ae3d82dc4b8b634c2f94f6d75f2d870ab45b5454befe7e42136"
	I1025 21:13:42.956500   30168 cri.go:89] found id: "88022f9ec005f546e0e3ec2d03250930b66b5cbd00d4b9b9951220a4b585a90e"
	I1025 21:13:42.956503   30168 cri.go:89] found id: "cae7a123866f87e1d8fb589192284fc130febc6f0c010a7d28ef1d233d853f98"
	I1025 21:13:42.956506   30168 cri.go:89] found id: "f3ba462392ca9c5385e38c2a083bbf9ddfda9f1167921d09881f70bd9ced25e6"
	I1025 21:13:42.956511   30168 cri.go:89] found id: "fc4d5bed77ab917b3ceadbb74308106640eee9431d50a76ee5c9987a704ca49d"
	I1025 21:13:42.956514   30168 cri.go:89] found id: "1e92b33ba54324086bf7dc1df7fcdb3e1f4e444fa861a0b23d641d2a7b3a8f06"
	I1025 21:13:42.956518   30168 cri.go:89] found id: "f44a7896bf35928272ab5a54c98a49a4eb190b1c4c9acea9a0af42eac42ca66a"
	I1025 21:13:42.956521   30168 cri.go:89] found id: "806b39c75be961f06871274a5780d28fdedd254f968386fa4e3d8bab27d25c85"
	I1025 21:13:42.956526   30168 cri.go:89] found id: "75ff7ad1772cd4cc641c27d81dc0ba3f5ae883af114601ed60edd0fe5e91f539"
	I1025 21:13:42.956529   30168 cri.go:89] found id: "992887bfba63158b15b41b7b4c6c040773c88e0814e92b79b9186090a6a838b6"
	I1025 21:13:42.956535   30168 cri.go:89] found id: "ce84af55679689b692ba4b8d7bb3dec0838ce24c452f4e2601e331cf53a83570"
	I1025 21:13:42.956539   30168 cri.go:89] found id: "5f7b376083c552e6a3037924a190823a01c9f75128f40fda99fe66afc09b5fd5"
	I1025 21:13:42.956543   30168 cri.go:89] found id: "03b775e92e4fbfcba74879e770c73eb07f1537c7937814e57a39084de34c1676"
	I1025 21:13:42.956546   30168 cri.go:89] found id: "7b230889788035517ae74d3b3c3ee09dd49a3de32493db3e17276fbbc8f68a57"
	I1025 21:13:42.956549   30168 cri.go:89] found id: "7bd55dfbf9f638594bd4e3dc7a593f548a86bb9472bda78a0e2308cc6278c607"
	I1025 21:13:42.956554   30168 cri.go:89] found id: "375b113702a156fe1ccf54013e40938a8a1c8cc66b19265a04d47d2f0372677a"
	I1025 21:13:42.956558   30168 cri.go:89] found id: ""
	I1025 21:13:42.956595   30168 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 21:13:42.993742   30168 out.go:177] 
	W1025 21:13:42.995416   30168 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-10-25T21:13:42Z" level=error msg="stat /run/runc/e33347aa0c17d61e30efd023e9fd21ae8fcfef8f72911434029e0d83b7160ff0: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-10-25T21:13:42Z" level=error msg="stat /run/runc/e33347aa0c17d61e30efd023e9fd21ae8fcfef8f72911434029e0d83b7160ff0: no such file or directory"
	
	W1025 21:13:42.995444   30168 out.go:239] * 
	* 
	W1025 21:13:42.997228   30168 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:13:42.998820   30168 out.go:177] 

** /stderr **
addons_test.go:825: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-276457 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-276457
helpers_test.go:235: (dbg) docker inspect addons-276457:

-- stdout --
	[
	    {
	        "Id": "8f1787a5dda684269d563bfd3f34339fa1c9e073fbbaba1ac14766eb9d10c359",
	        "Created": "2023-10-25T21:11:34.012909531Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 19897,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-25T21:11:34.311715955Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/8f1787a5dda684269d563bfd3f34339fa1c9e073fbbaba1ac14766eb9d10c359/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f1787a5dda684269d563bfd3f34339fa1c9e073fbbaba1ac14766eb9d10c359/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f1787a5dda684269d563bfd3f34339fa1c9e073fbbaba1ac14766eb9d10c359/hosts",
	        "LogPath": "/var/lib/docker/containers/8f1787a5dda684269d563bfd3f34339fa1c9e073fbbaba1ac14766eb9d10c359/8f1787a5dda684269d563bfd3f34339fa1c9e073fbbaba1ac14766eb9d10c359-json.log",
	        "Name": "/addons-276457",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-276457:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-276457",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d8cee1d16751d821163918220fe6d87821b97f56ce269eba65b96107ddc32555-init/diff:/var/lib/docker/overlay2/08f48c2099646ae35740a1c0f07609c9eefd4a79bbbda6d2c067385f70ad62be/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8cee1d16751d821163918220fe6d87821b97f56ce269eba65b96107ddc32555/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8cee1d16751d821163918220fe6d87821b97f56ce269eba65b96107ddc32555/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8cee1d16751d821163918220fe6d87821b97f56ce269eba65b96107ddc32555/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-276457",
	                "Source": "/var/lib/docker/volumes/addons-276457/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-276457",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-276457",
	                "name.minikube.sigs.k8s.io": "addons-276457",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c49c12eb175ca66a7f1c77a210afd495bdfada186779c43fc500aebe65e2d5d6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c49c12eb175c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-276457": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8f1787a5dda6",
	                        "addons-276457"
	                    ],
	                    "NetworkID": "ae6db73bce4272b8f387205e6fdf52e5e623531737d5981b3d82412778f26063",
	                    "EndpointID": "bf50b40c9bc4aecf2b8883464e01e67b95e44d9f6c6463ef5d06e0de62a7dbc6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-276457 -n addons-276457
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-276457 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-276457 logs -n 25: (1.679267799s)
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-868023   | jenkins | v1.31.2 | 25 Oct 23 21:10 UTC |                     |
	|         | -p download-only-868023                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-868023   | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC |                     |
	|         | -p download-only-868023                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC | 25 Oct 23 21:11 UTC |
	| delete  | -p download-only-868023                                                                     | download-only-868023   | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC | 25 Oct 23 21:11 UTC |
	| delete  | -p download-only-868023                                                                     | download-only-868023   | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC | 25 Oct 23 21:11 UTC |
	| start   | --download-only -p                                                                          | download-docker-264376 | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC |                     |
	|         | download-docker-264376                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-264376                                                                   | download-docker-264376 | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC | 25 Oct 23 21:11 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-856759   | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC |                     |
	|         | binary-mirror-856759                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40837                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-856759                                                                     | binary-mirror-856759   | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC | 25 Oct 23 21:11 UTC |
	| addons  | disable dashboard -p                                                                        | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC |                     |
	|         | addons-276457                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC |                     |
	|         | addons-276457                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-276457 --wait=true                                                                | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC | 25 Oct 23 21:13 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC | 25 Oct 23 21:13 UTC |
	|         | addons-276457                                                                               |                        |         |         |                     |                     |
	| addons  | addons-276457 addons disable                                                                | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC | 25 Oct 23 21:13 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-276457 ip                                                                            | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC | 25 Oct 23 21:13 UTC |
	| addons  | addons-276457 addons disable                                                                | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC | 25 Oct 23 21:13 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-276457 ssh cat                                                                       | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC | 25 Oct 23 21:13 UTC |
	|         | /opt/local-path-provisioner/pvc-b62d5b0e-4bb9-43b8-94d3-0062132da2ef_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC | 25 Oct 23 21:13 UTC |
	|         | -p addons-276457                                                                            |                        |         |         |                     |                     |
	| addons  | addons-276457 addons disable                                                                | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-276457 ssh curl -s                                                                   | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC | 25 Oct 23 21:13 UTC |
	|         | addons-276457                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-276457          | jenkins | v1.31.2 | 25 Oct 23 21:13 UTC |                     |
	|         | -p addons-276457                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 21:11:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 21:11:10.050783   19225 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:11:10.050950   19225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:11:10.050962   19225 out.go:309] Setting ErrFile to fd 2...
	I1025 21:11:10.050970   19225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:11:10.051164   19225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
	I1025 21:11:10.051828   19225 out.go:303] Setting JSON to false
	I1025 21:11:10.052679   19225 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3219,"bootTime":1698265051,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:11:10.052740   19225 start.go:138] virtualization: kvm guest
	I1025 21:11:10.054996   19225 out.go:177] * [addons-276457] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:11:10.056579   19225 notify.go:220] Checking for updates...
	I1025 21:11:10.056596   19225 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 21:11:10.057963   19225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:11:10.059324   19225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:11:10.060885   19225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	I1025 21:11:10.062234   19225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 21:11:10.063553   19225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:11:10.064991   19225 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:11:10.084355   19225 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:11:10.084414   19225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:11:10.133669   19225 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-10-25 21:11:10.125557202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:11:10.133776   19225 docker.go:295] overlay module found
	I1025 21:11:10.135786   19225 out.go:177] * Using the docker driver based on user configuration
	I1025 21:11:10.137496   19225 start.go:298] selected driver: docker
	I1025 21:11:10.137512   19225 start.go:902] validating driver "docker" against <nil>
	I1025 21:11:10.137522   19225 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:11:10.138222   19225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:11:10.183559   19225 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-10-25 21:11:10.175934542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:11:10.183747   19225 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 21:11:10.183960   19225 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:11:10.185920   19225 out.go:177] * Using Docker driver with root privileges
	I1025 21:11:10.187646   19225 cni.go:84] Creating CNI manager for ""
	I1025 21:11:10.187664   19225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 21:11:10.187677   19225 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 21:11:10.187708   19225 start_flags.go:323] config:
	{Name:addons-276457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-276457 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:11:10.189408   19225 out.go:177] * Starting control plane node addons-276457 in cluster addons-276457
	I1025 21:11:10.190778   19225 cache.go:121] Beginning downloading kic base image for docker with crio
	I1025 21:11:10.192127   19225 out.go:177] * Pulling base image ...
	I1025 21:11:10.193422   19225 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 21:11:10.193453   19225 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1025 21:11:10.193465   19225 cache.go:56] Caching tarball of preloaded images
	I1025 21:11:10.193517   19225 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 21:11:10.193571   19225 preload.go:174] Found /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 21:11:10.193585   19225 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1025 21:11:10.193926   19225 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/config.json ...
	I1025 21:11:10.193951   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/config.json: {Name:mk3778d29ed7a141fa579ee04d35ac0a42340c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:10.208155   19225 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1025 21:11:10.208264   19225 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1025 21:11:10.208280   19225 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory, skipping pull
	I1025 21:11:10.208285   19225 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in cache, skipping pull
	I1025 21:11:10.208295   19225 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1025 21:11:10.208300   19225 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 from local cache
	I1025 21:11:21.137474   19225 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 from cached tarball
	I1025 21:11:21.137507   19225 cache.go:194] Successfully downloaded all kic artifacts
	I1025 21:11:21.137534   19225 start.go:365] acquiring machines lock for addons-276457: {Name:mka6aae137d3f666d1cab21763ad542057ba8ff4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:11:21.137621   19225 start.go:369] acquired machines lock for "addons-276457" in 70.356µs
	I1025 21:11:21.137648   19225 start.go:93] Provisioning new machine with config: &{Name:addons-276457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-276457 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 21:11:21.137717   19225 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:11:21.139838   19225 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1025 21:11:21.140059   19225 start.go:159] libmachine.API.Create for "addons-276457" (driver="docker")
	I1025 21:11:21.140083   19225 client.go:168] LocalClient.Create starting
	I1025 21:11:21.140180   19225 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem
	I1025 21:11:21.266029   19225 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem
	I1025 21:11:21.474841   19225 cli_runner.go:164] Run: docker network inspect addons-276457 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:11:21.489833   19225 cli_runner.go:211] docker network inspect addons-276457 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:11:21.489889   19225 network_create.go:281] running [docker network inspect addons-276457] to gather additional debugging logs...
	I1025 21:11:21.489909   19225 cli_runner.go:164] Run: docker network inspect addons-276457
	W1025 21:11:21.503435   19225 cli_runner.go:211] docker network inspect addons-276457 returned with exit code 1
	I1025 21:11:21.503460   19225 network_create.go:284] error running [docker network inspect addons-276457]: docker network inspect addons-276457: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-276457 not found
	I1025 21:11:21.503476   19225 network_create.go:286] output of [docker network inspect addons-276457]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-276457 not found
	
	** /stderr **
	I1025 21:11:21.503563   19225 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:11:21.517895   19225 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023eaa20}
	I1025 21:11:21.517936   19225 network_create.go:124] attempt to create docker network addons-276457 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:11:21.517970   19225 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-276457 addons-276457
	I1025 21:11:21.566768   19225 network_create.go:108] docker network addons-276457 192.168.49.0/24 created
	I1025 21:11:21.566795   19225 kic.go:118] calculated static IP "192.168.49.2" for the "addons-276457" container
	I1025 21:11:21.566845   19225 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:11:21.581214   19225 cli_runner.go:164] Run: docker volume create addons-276457 --label name.minikube.sigs.k8s.io=addons-276457 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:11:21.597035   19225 oci.go:103] Successfully created a docker volume addons-276457
	I1025 21:11:21.597118   19225 cli_runner.go:164] Run: docker run --rm --name addons-276457-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-276457 --entrypoint /usr/bin/test -v addons-276457:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1025 21:11:28.812221   19225 cli_runner.go:217] Completed: docker run --rm --name addons-276457-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-276457 --entrypoint /usr/bin/test -v addons-276457:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib: (7.215052344s)
	I1025 21:11:28.812255   19225 oci.go:107] Successfully prepared a docker volume addons-276457
	I1025 21:11:28.812281   19225 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 21:11:28.812303   19225 kic.go:191] Starting extracting preloaded images to volume ...
	I1025 21:11:28.812359   19225 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-276457:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 21:11:33.947727   19225 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-276457:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (5.135303636s)
	I1025 21:11:33.947758   19225 kic.go:200] duration metric: took 5.135453 seconds to extract preloaded images to volume
	W1025 21:11:33.947908   19225 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 21:11:33.948005   19225 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 21:11:33.999394   19225 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-276457 --name addons-276457 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-276457 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-276457 --network addons-276457 --ip 192.168.49.2 --volume addons-276457:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 21:11:34.319460   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Running}}
	I1025 21:11:34.337242   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:11:34.353775   19225 cli_runner.go:164] Run: docker exec addons-276457 stat /var/lib/dpkg/alternatives/iptables
	I1025 21:11:34.392061   19225 oci.go:144] the created container "addons-276457" has a running status.
	I1025 21:11:34.392097   19225 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa...
	I1025 21:11:34.624266   19225 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 21:11:34.645085   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:11:34.667614   19225 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 21:11:34.667635   19225 kic_runner.go:114] Args: [docker exec --privileged addons-276457 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 21:11:34.739948   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:11:34.756615   19225 machine.go:88] provisioning docker machine ...
	I1025 21:11:34.756651   19225 ubuntu.go:169] provisioning hostname "addons-276457"
	I1025 21:11:34.756705   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:11:34.782585   19225 main.go:141] libmachine: Using SSH client type: native
	I1025 21:11:34.782944   19225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1025 21:11:34.782959   19225 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-276457 && echo "addons-276457" | sudo tee /etc/hostname
	I1025 21:11:34.947870   19225 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-276457
	
	I1025 21:11:34.947950   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:11:34.965434   19225 main.go:141] libmachine: Using SSH client type: native
	I1025 21:11:34.965918   19225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1025 21:11:34.965947   19225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-276457' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-276457/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-276457' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 21:11:35.081823   19225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 21:11:35.081848   19225 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17488-11542/.minikube CaCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17488-11542/.minikube}
	I1025 21:11:35.081870   19225 ubuntu.go:177] setting up certificates
	I1025 21:11:35.081879   19225 provision.go:83] configureAuth start
	I1025 21:11:35.081930   19225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-276457
	I1025 21:11:35.097634   19225 provision.go:138] copyHostCerts
	I1025 21:11:35.097692   19225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem (1123 bytes)
	I1025 21:11:35.097792   19225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem (1675 bytes)
	I1025 21:11:35.097889   19225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem (1078 bytes)
	I1025 21:11:35.097934   19225 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem org=jenkins.addons-276457 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-276457]
	I1025 21:11:35.320075   19225 provision.go:172] copyRemoteCerts
	I1025 21:11:35.320122   19225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 21:11:35.320151   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:11:35.335869   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:11:35.426027   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 21:11:35.445848   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 21:11:35.465535   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1025 21:11:35.484665   19225 provision.go:86] duration metric: configureAuth took 402.773332ms
	I1025 21:11:35.484690   19225 ubuntu.go:193] setting minikube options for container-runtime
	I1025 21:11:35.484848   19225 config.go:182] Loaded profile config "addons-276457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:11:35.484953   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:11:35.500482   19225 main.go:141] libmachine: Using SSH client type: native
	I1025 21:11:35.500843   19225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1025 21:11:35.500862   19225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 21:11:35.699473   19225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 21:11:35.699502   19225 machine.go:91] provisioned docker machine in 942.864685ms
	I1025 21:11:35.699514   19225 client.go:171] LocalClient.Create took 14.559422537s
	I1025 21:11:35.699531   19225 start.go:167] duration metric: libmachine.API.Create for "addons-276457" took 14.559471187s
	I1025 21:11:35.699540   19225 start.go:300] post-start starting for "addons-276457" (driver="docker")
	I1025 21:11:35.699554   19225 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 21:11:35.699634   19225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 21:11:35.699685   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:11:35.715270   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:11:35.802021   19225 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 21:11:35.804742   19225 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 21:11:35.804770   19225 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 21:11:35.804779   19225 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 21:11:35.804785   19225 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 21:11:35.804793   19225 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-11542/.minikube/addons for local assets ...
	I1025 21:11:35.804838   19225 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-11542/.minikube/files for local assets ...
	I1025 21:11:35.804860   19225 start.go:303] post-start completed in 105.312939ms
	I1025 21:11:35.805100   19225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-276457
	I1025 21:11:35.820277   19225 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/config.json ...
	I1025 21:11:35.820493   19225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:11:35.820529   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:11:35.835830   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:11:35.922515   19225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:11:35.926315   19225 start.go:128] duration metric: createHost completed in 14.788586203s
	I1025 21:11:35.926333   19225 start.go:83] releasing machines lock for "addons-276457", held for 14.788701726s
	I1025 21:11:35.926409   19225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-276457
	I1025 21:11:35.941586   19225 ssh_runner.go:195] Run: cat /version.json
	I1025 21:11:35.941626   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:11:35.941664   19225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 21:11:35.941717   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:11:35.958635   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:11:35.959381   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:11:36.142609   19225 ssh_runner.go:195] Run: systemctl --version
	I1025 21:11:36.146463   19225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 21:11:36.280603   19225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 21:11:36.284615   19225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:11:36.300625   19225 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1025 21:11:36.300712   19225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:11:36.325528   19225 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1025 21:11:36.325554   19225 start.go:472] detecting cgroup driver to use...
	I1025 21:11:36.325593   19225 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 21:11:36.325637   19225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 21:11:36.337875   19225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 21:11:36.347343   19225 docker.go:198] disabling cri-docker service (if available) ...
	I1025 21:11:36.347389   19225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 21:11:36.358902   19225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 21:11:36.370762   19225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 21:11:36.446974   19225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 21:11:36.530660   19225 docker.go:214] disabling docker service ...
	I1025 21:11:36.530727   19225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 21:11:36.546727   19225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 21:11:36.556324   19225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 21:11:36.633008   19225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 21:11:36.710165   19225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 21:11:36.720097   19225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 21:11:36.733193   19225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 21:11:36.733237   19225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:11:36.741031   19225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 21:11:36.741079   19225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:11:36.748698   19225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:11:36.756434   19225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:11:36.764403   19225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 21:11:36.771721   19225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 21:11:36.778419   19225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 21:11:36.784921   19225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 21:11:36.855096   19225 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 21:11:36.964048   19225 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 21:11:36.964145   19225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 21:11:36.967359   19225 start.go:540] Will wait 60s for crictl version
	I1025 21:11:36.967399   19225 ssh_runner.go:195] Run: which crictl
	I1025 21:11:36.970102   19225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 21:11:37.000915   19225 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1025 21:11:37.001025   19225 ssh_runner.go:195] Run: crio --version
	I1025 21:11:37.032435   19225 ssh_runner.go:195] Run: crio --version
	I1025 21:11:37.065447   19225 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1025 21:11:37.066955   19225 cli_runner.go:164] Run: docker network inspect addons-276457 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:11:37.082427   19225 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 21:11:37.085878   19225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 21:11:37.095445   19225 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 21:11:37.095513   19225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 21:11:37.147911   19225 crio.go:496] all images are preloaded for cri-o runtime.
	I1025 21:11:37.147936   19225 crio.go:415] Images already preloaded, skipping extraction
	I1025 21:11:37.147993   19225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 21:11:37.176883   19225 crio.go:496] all images are preloaded for cri-o runtime.
	I1025 21:11:37.176902   19225 cache_images.go:84] Images are preloaded, skipping loading
	I1025 21:11:37.176959   19225 ssh_runner.go:195] Run: crio config
	I1025 21:11:37.216315   19225 cni.go:84] Creating CNI manager for ""
	I1025 21:11:37.216334   19225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 21:11:37.216349   19225 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 21:11:37.216364   19225 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-276457 NodeName:addons-276457 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 21:11:37.216480   19225 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-276457"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 21:11:37.216544   19225 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-276457 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-276457 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 21:11:37.216590   19225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 21:11:37.224247   19225 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 21:11:37.224307   19225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 21:11:37.231565   19225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1025 21:11:37.245897   19225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 21:11:37.260252   19225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1025 21:11:37.274674   19225 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 21:11:37.277425   19225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
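The `{ grep -v …; echo …; } > tmp; cp` pattern used above for both `host.minikube.internal` and `control-plane.minikube.internal` is a standard idempotent hosts-entry update: strip any old mapping, append the fresh one, then replace the file in one copy. A sketch against a temporary file (the replacement IP here is made up for illustration):

```shell
#!/bin/bash
# Idempotent hosts-entry update, as in the log: remove any existing line for
# the name, append the new mapping, then overwrite the file with the result.
set -eu

hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

tmp=$(mktemp)
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.99\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"

cat "$hosts"
rm -f "$hosts" "$tmp"
```

Writing to a temp file first matters because redirecting `grep`'s output straight back into its own input file would truncate it before the read.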
	I1025 21:11:37.286125   19225 certs.go:56] Setting up /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457 for IP: 192.168.49.2
	I1025 21:11:37.286157   19225 certs.go:190] acquiring lock for shared ca certs: {Name:mk35413dbabac2652d1fa66d4e17d237360108a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.286271   19225 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key
	I1025 21:11:37.366588   19225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt ...
	I1025 21:11:37.366614   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt: {Name:mkefe46340403c86f272053d2be94b125b0e830e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.366771   19225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key ...
	I1025 21:11:37.366781   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key: {Name:mke1b03fa8b0a61edd372405bab4cc2e83047e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.366846   19225 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key
	I1025 21:11:37.582977   19225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.crt ...
	I1025 21:11:37.583001   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.crt: {Name:mkfd638367e0523ada76601355cf5b82c5609ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.583157   19225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key ...
	I1025 21:11:37.583167   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key: {Name:mk41a913670aa409f35f53803f3e356eb2c82175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.583262   19225 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.key
	I1025 21:11:37.583274   19225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt with IP's: []
	I1025 21:11:37.649266   19225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt ...
	I1025 21:11:37.649294   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: {Name:mka3fa749f033f7a4bef4f320d595255d33c27bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.649437   19225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.key ...
	I1025 21:11:37.649449   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.key: {Name:mk723382aaad916e2596dc57aa70df97172720dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.649508   19225 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.key.dd3b5fb2
	I1025 21:11:37.649524   19225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 21:11:37.811523   19225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.crt.dd3b5fb2 ...
	I1025 21:11:37.811549   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.crt.dd3b5fb2: {Name:mk6f846e9feb3735bec33b4b77765f793d9a50e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.811692   19225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.key.dd3b5fb2 ...
	I1025 21:11:37.811702   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.key.dd3b5fb2: {Name:mk0ae9d7e303f74f16bbc9aa8d97d83c2d6be466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:37.811775   19225 certs.go:337] copying /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.crt
	I1025 21:11:37.811848   19225 certs.go:341] copying /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.key
	I1025 21:11:37.811894   19225 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/proxy-client.key
	I1025 21:11:37.811910   19225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/proxy-client.crt with IP's: []
	I1025 21:11:38.114475   19225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/proxy-client.crt ...
	I1025 21:11:38.114501   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/proxy-client.crt: {Name:mk57616c3a58ba5609f71620261fc4676b8d6794 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:38.114640   19225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/proxy-client.key ...
	I1025 21:11:38.114650   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/proxy-client.key: {Name:mk3f4e0dc07bd6d6285ff2e61abd9c57717a9b7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:11:38.114795   19225 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 21:11:38.114827   19225 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem (1078 bytes)
	I1025 21:11:38.114851   19225 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem (1123 bytes)
	I1025 21:11:38.114875   19225 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem (1675 bytes)
	I1025 21:11:38.115371   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 21:11:38.136227   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 21:11:38.155372   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 21:11:38.174962   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 21:11:38.194627   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 21:11:38.213849   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 21:11:38.232677   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 21:11:38.251982   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 21:11:38.271086   19225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 21:11:38.291176   19225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 21:11:38.306087   19225 ssh_runner.go:195] Run: openssl version
	I1025 21:11:38.310775   19225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 21:11:38.318832   19225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:11:38.321699   19225 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:11:38.321740   19225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:11:38.327743   19225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
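The two steps above reproduce what `c_rehash` does: compute the certificate's subject hash with `openssl x509 -hash` and symlink `<hash>.0` to the PEM so OpenSSL's directory lookup in `/etc/ssl/certs` can find it. A sketch using a throwaway self-signed certificate in a temp directory instead of minikubeCA.pem (the CN and paths are illustrative):

```shell
#!/bin/bash
# Subject-hash symlink technique from the log, demonstrated on a disposable cert.
set -eu

dir=$(mktemp -d)
# Generate a throwaway self-signed certificate to stand in for the CA.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' \
  -keyout "$dir/ca.key" -out "$dir/demoCA.pem" -days 1 2>/dev/null

# This hash is what names the /etc/ssl/certs/<hash>.0 symlink.
hash=$(openssl x509 -hash -noout -in "$dir/demoCA.pem")
ln -fs "$dir/demoCA.pem" "$dir/$hash.0"

ls -l "$dir/$hash.0"
rm -rf "$dir"
```

The `.0` suffix is a collision counter; a second certificate with the same subject hash would get `.1`, which is why the log's `test -L || ln -fs` guard only creates the link when it is missing.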
	I1025 21:11:38.336136   19225 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 21:11:38.339362   19225 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 21:11:38.339407   19225 kubeadm.go:404] StartCluster: {Name:addons-276457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-276457 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:11:38.339490   19225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 21:11:38.339538   19225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 21:11:38.370511   19225 cri.go:89] found id: ""
	I1025 21:11:38.370573   19225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 21:11:38.377871   19225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 21:11:38.385040   19225 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 21:11:38.385090   19225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 21:11:38.392797   19225 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 21:11:38.392843   19225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 21:11:38.465551   19225 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1025 21:11:38.522322   19225 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 21:11:47.437288   19225 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1025 21:11:47.437365   19225 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 21:11:47.437494   19225 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1025 21:11:47.437576   19225 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-gcp
	I1025 21:11:47.437644   19225 kubeadm.go:322] OS: Linux
	I1025 21:11:47.437728   19225 kubeadm.go:322] CGROUPS_CPU: enabled
	I1025 21:11:47.437795   19225 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1025 21:11:47.437867   19225 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1025 21:11:47.437926   19225 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1025 21:11:47.438009   19225 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1025 21:11:47.438101   19225 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1025 21:11:47.438180   19225 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1025 21:11:47.438261   19225 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1025 21:11:47.438355   19225 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1025 21:11:47.438456   19225 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 21:11:47.438572   19225 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 21:11:47.438705   19225 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 21:11:47.438813   19225 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 21:11:47.440447   19225 out.go:204]   - Generating certificates and keys ...
	I1025 21:11:47.440551   19225 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 21:11:47.440643   19225 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 21:11:47.440742   19225 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 21:11:47.440834   19225 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1025 21:11:47.440929   19225 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1025 21:11:47.441003   19225 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1025 21:11:47.441079   19225 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1025 21:11:47.441238   19225 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-276457 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 21:11:47.441309   19225 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1025 21:11:47.441471   19225 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-276457 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 21:11:47.441580   19225 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 21:11:47.441674   19225 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 21:11:47.441741   19225 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1025 21:11:47.441816   19225 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 21:11:47.441907   19225 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 21:11:47.442010   19225 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 21:11:47.442109   19225 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 21:11:47.442194   19225 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 21:11:47.442326   19225 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 21:11:47.442423   19225 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 21:11:47.444180   19225 out.go:204]   - Booting up control plane ...
	I1025 21:11:47.444313   19225 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 21:11:47.444510   19225 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 21:11:47.444599   19225 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 21:11:47.444718   19225 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 21:11:47.444844   19225 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 21:11:47.444906   19225 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 21:11:47.445066   19225 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 21:11:47.445144   19225 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001856 seconds
	I1025 21:11:47.445257   19225 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 21:11:47.445387   19225 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 21:11:47.445490   19225 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 21:11:47.445743   19225 kubeadm.go:322] [mark-control-plane] Marking the node addons-276457 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 21:11:47.445792   19225 kubeadm.go:322] [bootstrap-token] Using token: fbrqzi.9feo4t3e7ievi3oe
	I1025 21:11:47.447288   19225 out.go:204]   - Configuring RBAC rules ...
	I1025 21:11:47.447399   19225 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 21:11:47.447483   19225 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 21:11:47.447635   19225 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 21:11:47.447819   19225 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 21:11:47.448014   19225 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 21:11:47.448152   19225 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 21:11:47.448315   19225 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 21:11:47.448358   19225 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1025 21:11:47.448398   19225 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1025 21:11:47.448404   19225 kubeadm.go:322] 
	I1025 21:11:47.448487   19225 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1025 21:11:47.448501   19225 kubeadm.go:322] 
	I1025 21:11:47.448609   19225 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1025 21:11:47.448620   19225 kubeadm.go:322] 
	I1025 21:11:47.448656   19225 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1025 21:11:47.448743   19225 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 21:11:47.448822   19225 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 21:11:47.448834   19225 kubeadm.go:322] 
	I1025 21:11:47.448921   19225 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1025 21:11:47.448935   19225 kubeadm.go:322] 
	I1025 21:11:47.449003   19225 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 21:11:47.449015   19225 kubeadm.go:322] 
	I1025 21:11:47.449101   19225 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1025 21:11:47.449189   19225 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 21:11:47.449263   19225 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 21:11:47.449273   19225 kubeadm.go:322] 
	I1025 21:11:47.449360   19225 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 21:11:47.449459   19225 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1025 21:11:47.449468   19225 kubeadm.go:322] 
	I1025 21:11:47.449565   19225 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fbrqzi.9feo4t3e7ievi3oe \
	I1025 21:11:47.449685   19225 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:81aa62e087573fa9098e2a57ea7cc4407ea343d82712bf34cdaff83258d6f892 \
	I1025 21:11:47.449724   19225 kubeadm.go:322] 	--control-plane 
	I1025 21:11:47.449739   19225 kubeadm.go:322] 
	I1025 21:11:47.449837   19225 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1025 21:11:47.449849   19225 kubeadm.go:322] 
	I1025 21:11:47.449963   19225 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fbrqzi.9feo4t3e7ievi3oe \
	I1025 21:11:47.450089   19225 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:81aa62e087573fa9098e2a57ea7cc4407ea343d82712bf34cdaff83258d6f892 
	I1025 21:11:47.450105   19225 cni.go:84] Creating CNI manager for ""
	I1025 21:11:47.450115   19225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 21:11:47.451769   19225 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1025 21:11:47.453178   19225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 21:11:47.457069   19225 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1025 21:11:47.457087   19225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1025 21:11:47.473600   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 21:11:48.125951   19225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 21:11:48.126037   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:48.126071   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc minikube.k8s.io/name=addons-276457 minikube.k8s.io/updated_at=2023_10_25T21_11_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:48.196147   19225 ops.go:34] apiserver oom_adj: -16
	I1025 21:11:48.196264   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:48.270917   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:48.832635   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:49.332777   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:49.832638   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:50.332067   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:50.832763   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:51.332370   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:51.832991   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:52.332631   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:52.832366   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:53.332282   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:53.832312   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:54.332056   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:54.832425   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:55.332158   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:55.832089   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:56.332186   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:56.832181   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:57.332452   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:57.832486   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:58.332510   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:58.832779   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:59.332202   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:11:59.832352   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:12:00.332048   19225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:12:00.394997   19225 kubeadm.go:1081] duration metric: took 12.26901345s to wait for elevateKubeSystemPrivileges.
	I1025 21:12:00.395028   19225 kubeadm.go:406] StartCluster complete in 22.055625096s
	I1025 21:12:00.395047   19225 settings.go:142] acquiring lock: {Name:mkdc9277e8465489704340df47f71845c1a0d579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:12:00.395151   19225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:12:00.395493   19225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/kubeconfig: {Name:mk64fd87b209032b3c81ef85df6a4de19f21a5bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:12:00.395680   19225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 21:12:00.395715   19225 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1025 21:12:00.395792   19225 addons.go:69] Setting default-storageclass=true in profile "addons-276457"
	I1025 21:12:00.395804   19225 addons.go:69] Setting volumesnapshots=true in profile "addons-276457"
	I1025 21:12:00.395808   19225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-276457"
	I1025 21:12:00.395806   19225 addons.go:69] Setting cloud-spanner=true in profile "addons-276457"
	I1025 21:12:00.395818   19225 addons.go:231] Setting addon volumesnapshots=true in "addons-276457"
	I1025 21:12:00.395833   19225 addons.go:69] Setting inspektor-gadget=true in profile "addons-276457"
	I1025 21:12:00.395847   19225 addons.go:231] Setting addon cloud-spanner=true in "addons-276457"
	I1025 21:12:00.395852   19225 addons.go:231] Setting addon inspektor-gadget=true in "addons-276457"
	I1025 21:12:00.395865   19225 config.go:182] Loaded profile config "addons-276457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:12:00.395873   19225 addons.go:69] Setting registry=true in profile "addons-276457"
	I1025 21:12:00.395897   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.395897   19225 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-276457"
	I1025 21:12:00.395894   19225 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-276457"
	I1025 21:12:00.395905   19225 addons.go:231] Setting addon registry=true in "addons-276457"
	I1025 21:12:00.395911   19225 addons.go:69] Setting helm-tiller=true in profile "addons-276457"
	I1025 21:12:00.395911   19225 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-276457"
	I1025 21:12:00.395919   19225 addons.go:69] Setting ingress=true in profile "addons-276457"
	I1025 21:12:00.395926   19225 addons.go:69] Setting ingress-dns=true in profile "addons-276457"
	I1025 21:12:00.395936   19225 addons.go:231] Setting addon ingress=true in "addons-276457"
	I1025 21:12:00.395936   19225 addons.go:231] Setting addon ingress-dns=true in "addons-276457"
	I1025 21:12:00.395941   19225 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-276457"
	I1025 21:12:00.395951   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.395976   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.395978   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.395983   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.396169   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.396169   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.396256   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.396333   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.396393   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.396404   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.396431   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.395868   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.397164   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.395920   19225 addons.go:231] Setting addon helm-tiller=true in "addons-276457"
	I1025 21:12:00.395897   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.397479   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.397766   19225 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-276457"
	I1025 21:12:00.397784   19225 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-276457"
	I1025 21:12:00.397825   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.397892   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.397923   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.395888   19225 addons.go:69] Setting storage-provisioner=true in profile "addons-276457"
	I1025 21:12:00.398553   19225 addons.go:231] Setting addon storage-provisioner=true in "addons-276457"
	I1025 21:12:00.398598   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.399045   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.395905   19225 addons.go:69] Setting gcp-auth=true in profile "addons-276457"
	I1025 21:12:00.401401   19225 mustload.go:65] Loading cluster: addons-276457
	I1025 21:12:00.395904   19225 addons.go:69] Setting metrics-server=true in profile "addons-276457"
	I1025 21:12:00.402245   19225 addons.go:231] Setting addon metrics-server=true in "addons-276457"
	I1025 21:12:00.402326   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.432480   19225 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1025 21:12:00.435031   19225 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1025 21:12:00.435047   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 21:12:00.434729   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.435267   19225 config.go:182] Loaded profile config "addons-276457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:12:00.435444   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.436995   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 21:12:00.434893   19225 addons.go:231] Setting addon default-storageclass=true in "addons-276457"
	I1025 21:12:00.437039   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.438406   19225 out.go:177]   - Using image docker.io/registry:2.8.3
	I1025 21:12:00.435184   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.434848   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.437855   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.440459   19225 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-276457"
	I1025 21:12:00.440934   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.441412   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:00.447422   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 21:12:00.446005   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 21:12:00.450233   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 21:12:00.448937   19225 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.3
	I1025 21:12:00.449052   19225 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1025 21:12:00.452934   19225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 21:12:00.452951   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 21:12:00.453007   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.454670   19225 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1025 21:12:00.454897   19225 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 21:12:00.454710   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 21:12:00.456011   19225 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1025 21:12:00.456026   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1025 21:12:00.457362   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.457420   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 21:12:00.459429   19225 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1025 21:12:00.459370   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 21:12:00.460788   19225 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 21:12:00.460804   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1025 21:12:00.459641   19225 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 21:12:00.460845   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1025 21:12:00.460851   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.460890   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.462212   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 21:12:00.463454   19225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 21:12:00.464688   19225 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 21:12:00.464704   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 21:12:00.464749   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.465693   19225 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-276457" context rescaled to 1 replicas
	I1025 21:12:00.465729   19225 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 21:12:00.467306   19225 out.go:177] * Verifying Kubernetes components...
	I1025 21:12:00.468597   19225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:12:00.475687   19225 out.go:177]   - Using image docker.io/busybox:stable
	I1025 21:12:00.476934   19225 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 21:12:00.478408   19225 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 21:12:00.478431   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 21:12:00.478487   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.483751   19225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:12:00.484991   19225 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 21:12:00.485012   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 21:12:00.485081   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.491119   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.493756   19225 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1025 21:12:00.496470   19225 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 21:12:00.496492   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 21:12:00.496548   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.497415   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.500339   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:00.504662   19225 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1025 21:12:00.506154   19225 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1025 21:12:00.506171   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1025 21:12:00.506218   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.507748   19225 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1025 21:12:00.508955   19225 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1025 21:12:00.508977   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1025 21:12:00.509026   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.517543   19225 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 21:12:00.517565   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 21:12:00.517613   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.524029   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.524112   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.527571   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.539375   19225 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.1
	I1025 21:12:00.540885   19225 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 21:12:00.540901   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 21:12:00.540958   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:00.547839   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.548187   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.549497   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.568871   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.571534   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.573984   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.579725   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.580290   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:00.627625   19225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1025 21:12:00.628541   19225 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 21:12:00.628561   19225 retry.go:31] will retry after 369.922544ms: ssh: handshake failed: EOF
	I1025 21:12:00.628667   19225 node_ready.go:35] waiting up to 6m0s for node "addons-276457" to be "Ready" ...
	I1025 21:12:00.746895   19225 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 21:12:00.746917   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 21:12:00.756838   19225 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 21:12:00.756865   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 21:12:00.831475   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 21:12:00.926790   19225 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 21:12:00.926859   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 21:12:00.927327   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 21:12:00.930171   19225 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1025 21:12:00.930195   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1025 21:12:00.932619   19225 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 21:12:00.932684   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 21:12:01.045335   19225 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 21:12:01.045418   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 21:12:01.048084   19225 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1025 21:12:01.048150   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1025 21:12:01.133694   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 21:12:01.138100   19225 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 21:12:01.138165   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 21:12:01.140808   19225 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1025 21:12:01.140866   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1025 21:12:01.144189   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 21:12:01.229479   19225 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 21:12:01.229556   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 21:12:01.238644   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 21:12:01.244145   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 21:12:01.329121   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 21:12:01.335466   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 21:12:01.340002   19225 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 21:12:01.340077   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 21:12:01.426904   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1025 21:12:01.427108   19225 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 21:12:01.427175   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 21:12:01.527964   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 21:12:01.641601   19225 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 21:12:01.641695   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 21:12:01.644193   19225 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1025 21:12:01.644260   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1025 21:12:01.838739   19225 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 21:12:01.838833   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 21:12:02.127809   19225 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1025 21:12:02.127891   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1025 21:12:02.228531   19225 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.600862843s)
	I1025 21:12:02.228571   19225 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1025 21:12:02.343596   19225 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 21:12:02.343636   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 21:12:02.446386   19225 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 21:12:02.446413   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 21:12:02.743473   19225 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1025 21:12:02.743551   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1025 21:12:02.831704   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:02.838093   19225 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 21:12:02.838159   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 21:12:02.934332   19225 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 21:12:02.934430   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 21:12:03.032851   19225 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1025 21:12:03.032929   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1025 21:12:03.127754   19225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 21:12:03.127833   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 21:12:03.144481   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 21:12:03.335639   19225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 21:12:03.335666   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 21:12:03.449186   19225 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 21:12:03.449212   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1025 21:12:03.728207   19225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 21:12:03.728238   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 21:12:03.744894   19225 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1025 21:12:03.744923   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1025 21:12:03.842250   19225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 21:12:03.842339   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 21:12:03.940453   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1025 21:12:03.947265   19225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 21:12:03.947293   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 21:12:04.227021   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 21:12:04.645251   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.813677657s)
	I1025 21:12:04.838886   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:05.033841   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.106481506s)
	I1025 21:12:05.826832   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.693038034s)
	I1025 21:12:06.552033   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.407809737s)
	I1025 21:12:06.552068   19225 addons.go:467] Verifying addon ingress=true in "addons-276457"
	I1025 21:12:06.552121   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.313388569s)
	I1025 21:12:06.552152   19225 addons.go:467] Verifying addon registry=true in "addons-276457"
	I1025 21:12:06.552185   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.307961425s)
	I1025 21:12:06.554496   19225 out.go:177] * Verifying registry addon...
	I1025 21:12:06.552208   19225 addons.go:467] Verifying addon metrics-server=true in "addons-276457"
	I1025 21:12:06.552293   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.223125195s)
	I1025 21:12:06.552342   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.216791649s)
	I1025 21:12:06.552398   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.125463299s)
	I1025 21:12:06.552445   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.024388063s)
	I1025 21:12:06.552544   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.4079754s)
	I1025 21:12:06.556995   19225 out.go:177] * Verifying ingress addon...
	W1025 21:12:06.557024   19225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 21:12:06.558415   19225 retry.go:31] will retry after 236.202102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 21:12:06.552619   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.612130669s)
	I1025 21:12:06.556451   19225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 21:12:06.559042   19225 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 21:12:06.562546   19225 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 21:12:06.562560   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:06.562757   19225 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 21:12:06.562773   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:06.629856   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:06.630115   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:06.795511   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 21:12:07.133505   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:07.133562   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:07.154902   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:07.332150   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.105069614s)
	I1025 21:12:07.332194   19225 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-276457"
	I1025 21:12:07.334199   19225 out.go:177] * Verifying csi-hostpath-driver addon...
	I1025 21:12:07.337237   19225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 21:12:07.338505   19225 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 21:12:07.338610   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:07.340573   19225 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 21:12:07.340593   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:07.343839   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:07.359189   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:07.462171   19225 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 21:12:07.536333   19225 addons.go:231] Setting addon gcp-auth=true in "addons-276457"
	I1025 21:12:07.536475   19225 host.go:66] Checking if "addons-276457" exists ...
	I1025 21:12:07.536998   19225 cli_runner.go:164] Run: docker container inspect addons-276457 --format={{.State.Status}}
	I1025 21:12:07.558237   19225 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 21:12:07.558298   19225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-276457
	I1025 21:12:07.574134   19225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/addons-276457/id_rsa Username:docker}
	I1025 21:12:07.635647   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:07.635743   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:07.849138   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:08.135503   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:08.136265   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:08.348558   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:08.643043   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:08.644063   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:08.848897   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:08.950776   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.155222699s)
	I1025 21:12:08.950849   19225 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.392581165s)
	I1025 21:12:08.953078   19225 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1025 21:12:08.954883   19225 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1025 21:12:08.956500   19225 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 21:12:08.956517   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 21:12:09.038689   19225 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 21:12:09.038756   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 21:12:09.126917   19225 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 21:12:09.126943   19225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1025 21:12:09.133985   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:09.134839   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:09.146321   19225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 21:12:09.156076   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:09.349034   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:09.636106   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:09.637148   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:09.849302   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:10.134581   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:10.134790   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:10.349113   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:10.635781   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:10.636129   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:10.734659   19225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.588298206s)
	I1025 21:12:10.735581   19225 addons.go:467] Verifying addon gcp-auth=true in "addons-276457"
	I1025 21:12:10.737415   19225 out.go:177] * Verifying gcp-auth addon...
	I1025 21:12:10.739887   19225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 21:12:10.742617   19225 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 21:12:10.742641   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:10.746634   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:10.848472   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:11.134214   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:11.134439   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:11.250014   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:11.349308   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:11.633104   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:11.633372   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:11.654357   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:11.750480   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:11.847991   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:12.134116   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:12.134718   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:12.250547   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:12.347886   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:12.634011   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:12.634022   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:12.750390   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:12.848027   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:13.134037   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:13.135120   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:13.249836   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:13.348338   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:13.633424   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:13.633569   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:13.654530   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:13.750062   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:13.848573   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:14.133495   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:14.133734   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:14.250067   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:14.347553   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:14.633526   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:14.633911   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:14.749905   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:14.848159   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:15.133262   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:15.133613   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:15.249823   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:15.348321   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:15.633403   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:15.633497   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:15.749779   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:15.847519   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:16.133833   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:16.134029   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:16.155002   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:16.249601   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:16.348006   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:16.633201   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:16.633259   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:16.749963   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:16.848115   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:17.133480   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:17.133721   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:17.249999   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:17.348522   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:17.633962   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:17.634577   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:17.749563   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:17.847841   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:18.133895   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:18.134120   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:18.249430   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:18.347867   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:18.633578   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:18.633682   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:18.654849   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:18.750350   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:18.847622   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:19.133914   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:19.134098   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:19.249857   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:19.348229   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:19.633255   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:19.633484   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:19.749661   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:19.847690   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:20.133187   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:20.133336   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:20.250493   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:20.347640   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:20.634037   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:20.634542   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:20.749285   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:20.847646   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:21.134352   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:21.134383   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:21.154674   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:21.250185   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:21.347526   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:21.633300   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:21.633436   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:21.750084   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:21.847267   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:22.133632   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:22.133720   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:22.249521   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:22.348503   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:22.633486   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:22.633565   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:22.750129   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:22.847255   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:23.133307   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:23.133853   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:23.250161   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:23.347758   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:23.633860   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:23.633911   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:23.655242   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:23.749703   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:23.848058   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:24.133318   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:24.133604   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:24.249936   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:24.348051   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:24.632996   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:24.633416   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:24.749833   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:24.848028   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:25.133303   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:25.133529   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:25.249937   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:25.348354   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:25.633186   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:25.633240   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:25.749663   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:25.848596   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:26.133782   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:26.134110   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:26.155001   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:26.249392   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:26.347872   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:26.633678   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:26.633841   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:26.750213   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:26.847257   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:27.133319   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:27.133423   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:27.249993   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:27.348705   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:27.633509   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:27.633724   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:27.750096   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:27.848561   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:28.133628   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:28.133897   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:28.250361   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:28.347652   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:28.633382   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:28.633588   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:28.654595   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:28.750115   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:28.847408   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:29.134037   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:29.134371   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:29.250029   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:29.348451   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:29.633813   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:29.634024   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:29.749236   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:29.850405   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:30.133779   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:30.133920   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:30.249656   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:30.348369   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:30.633305   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:30.633481   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:30.749818   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:30.848216   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:31.133198   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:31.133537   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:31.154250   19225 node_ready.go:58] node "addons-276457" has status "Ready":"False"
	I1025 21:12:31.249863   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:31.348328   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:31.633382   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:31.633517   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:31.750016   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:31.848287   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:32.133307   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:32.133500   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:32.250000   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:32.348391   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:32.633229   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:32.633409   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:32.750150   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:32.847378   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:33.136568   19225 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 21:12:33.136595   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:33.138842   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:33.155222   19225 node_ready.go:49] node "addons-276457" has status "Ready":"True"
	I1025 21:12:33.155260   19225 node_ready.go:38] duration metric: took 32.526557864s waiting for node "addons-276457" to be "Ready" ...
	I1025 21:12:33.155273   19225 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 21:12:33.165439   19225 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sf5h2" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:33.250601   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:33.351440   19225 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 21:12:33.351468   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:33.633866   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:33.634046   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:33.749793   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:33.849995   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:34.134141   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:34.134192   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:34.249567   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:34.348955   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:34.633634   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:34.633743   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:34.681949   19225 pod_ready.go:92] pod "coredns-5dd5756b68-sf5h2" in "kube-system" namespace has status "Ready":"True"
	I1025 21:12:34.681977   19225 pod_ready.go:81] duration metric: took 1.516502695s waiting for pod "coredns-5dd5756b68-sf5h2" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:34.681995   19225 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-276457" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:34.686497   19225 pod_ready.go:92] pod "etcd-addons-276457" in "kube-system" namespace has status "Ready":"True"
	I1025 21:12:34.686561   19225 pod_ready.go:81] duration metric: took 4.558807ms waiting for pod "etcd-addons-276457" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:34.686578   19225 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-276457" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:34.690861   19225 pod_ready.go:92] pod "kube-apiserver-addons-276457" in "kube-system" namespace has status "Ready":"True"
	I1025 21:12:34.690878   19225 pod_ready.go:81] duration metric: took 4.293041ms waiting for pod "kube-apiserver-addons-276457" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:34.690887   19225 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-276457" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:34.750234   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:34.755054   19225 pod_ready.go:92] pod "kube-controller-manager-addons-276457" in "kube-system" namespace has status "Ready":"True"
	I1025 21:12:34.755073   19225 pod_ready.go:81] duration metric: took 64.179742ms waiting for pod "kube-controller-manager-addons-276457" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:34.755084   19225 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lfxtf" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:34.849182   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:35.133762   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:35.133880   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:35.155334   19225 pod_ready.go:92] pod "kube-proxy-lfxtf" in "kube-system" namespace has status "Ready":"True"
	I1025 21:12:35.155355   19225 pod_ready.go:81] duration metric: took 400.266104ms waiting for pod "kube-proxy-lfxtf" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:35.155363   19225 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-276457" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:35.250173   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:35.348328   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:35.555607   19225 pod_ready.go:92] pod "kube-scheduler-addons-276457" in "kube-system" namespace has status "Ready":"True"
	I1025 21:12:35.555629   19225 pod_ready.go:81] duration metric: took 400.259869ms waiting for pod "kube-scheduler-addons-276457" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:35.555638   19225 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:35.633867   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:35.634021   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:35.749463   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:35.848519   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:36.136054   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:36.136147   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:36.251745   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:36.350024   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:36.640651   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:36.641581   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:36.750219   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:36.850325   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:37.136086   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:37.136405   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:37.249808   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:37.349930   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:37.634741   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:37.636183   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:37.750147   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:37.849140   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:37.935890   19225 pod_ready.go:102] pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace has status "Ready":"False"
	I1025 21:12:38.135034   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:38.135446   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:38.250182   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:38.349542   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:38.634565   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:38.635108   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:38.751230   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:38.850001   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:39.134954   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:39.135602   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:39.250112   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:39.349854   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:39.634350   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:39.634398   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:39.749750   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:39.849451   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:40.136552   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:40.136798   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:40.250063   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:40.349840   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:40.434911   19225 pod_ready.go:102] pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace has status "Ready":"False"
	I1025 21:12:40.633852   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:40.634010   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:40.750557   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:40.849042   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:41.134037   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:41.134109   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:41.249623   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:41.350272   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:41.633973   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:41.634654   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:41.750924   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:41.849171   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:42.134533   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:42.134676   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:42.250431   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:42.348429   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:42.634154   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:42.634920   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:42.750072   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:42.849672   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:42.936003   19225 pod_ready.go:102] pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace has status "Ready":"False"
	I1025 21:12:43.134915   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:43.135288   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:43.250865   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:43.350031   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:43.647860   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:43.648426   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:43.750902   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:43.850167   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:44.135522   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:44.135812   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:44.250556   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:44.349860   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:44.634416   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:44.634515   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:44.750361   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:44.848913   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:45.136647   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:45.136709   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:45.250866   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:45.349404   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:45.434200   19225 pod_ready.go:102] pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace has status "Ready":"False"
	I1025 21:12:45.633680   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:45.633741   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:45.750468   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:45.849268   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:46.133983   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:46.133992   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:46.249565   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:46.349080   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:46.634276   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:46.634298   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:46.749980   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:46.850355   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:47.134834   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:47.135031   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:47.250194   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:47.349470   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:47.436254   19225 pod_ready.go:102] pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace has status "Ready":"False"
	I1025 21:12:47.637208   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:47.637248   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:47.749839   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:47.849034   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:48.134521   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:48.134646   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:48.249987   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:48.349741   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:48.634974   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:48.635642   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:48.750484   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:48.848832   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:49.133858   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:49.134330   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:49.250267   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:49.350544   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:49.634274   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:49.634446   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:49.750191   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:49.849645   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:49.936272   19225 pod_ready.go:102] pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace has status "Ready":"False"
	I1025 21:12:50.134475   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:50.134601   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:50.249941   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:50.350002   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:50.634471   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:50.634696   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:50.750295   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:50.848376   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:50.934411   19225 pod_ready.go:92] pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace has status "Ready":"True"
	I1025 21:12:50.934431   19225 pod_ready.go:81] duration metric: took 15.378787487s waiting for pod "metrics-server-7c66d45ddc-npx6l" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:50.934440   19225 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6vcl4" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:50.938535   19225 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-6vcl4" in "kube-system" namespace has status "Ready":"True"
	I1025 21:12:50.938553   19225 pod_ready.go:81] duration metric: took 4.107301ms waiting for pod "nvidia-device-plugin-daemonset-6vcl4" in "kube-system" namespace to be "Ready" ...
	I1025 21:12:50.938571   19225 pod_ready.go:38] duration metric: took 17.783282137s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 21:12:50.938590   19225 api_server.go:52] waiting for apiserver process to appear ...
	I1025 21:12:50.938641   19225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:12:50.951019   19225 api_server.go:72] duration metric: took 50.485254845s to wait for apiserver process to appear ...
	I1025 21:12:50.951054   19225 api_server.go:88] waiting for apiserver healthz status ...
	I1025 21:12:50.951076   19225 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 21:12:50.955022   19225 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 21:12:50.955983   19225 api_server.go:141] control plane version: v1.28.3
	I1025 21:12:50.956003   19225 api_server.go:131] duration metric: took 4.943529ms to wait for apiserver health ...
	I1025 21:12:50.956011   19225 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 21:12:50.963925   19225 system_pods.go:59] 19 kube-system pods found
	I1025 21:12:50.963956   19225 system_pods.go:61] "coredns-5dd5756b68-sf5h2" [751ca8b7-0f96-4283-985e-466a5465488b] Running
	I1025 21:12:50.963961   19225 system_pods.go:61] "csi-hostpath-attacher-0" [db70516b-fb4f-4675-809f-c13a75b3520b] Running
	I1025 21:12:50.963970   19225 system_pods.go:61] "csi-hostpath-resizer-0" [4209b996-75ca-4014-8e18-94ac7624feb4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 21:12:50.963979   19225 system_pods.go:61] "csi-hostpathplugin-lpvws" [dcd7bf3c-50b6-4316-af65-6502373843a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 21:12:50.963987   19225 system_pods.go:61] "etcd-addons-276457" [5b2496f0-24e4-4b6f-96a2-178f23708977] Running
	I1025 21:12:50.963992   19225 system_pods.go:61] "kindnet-gwvhf" [e43f73bf-ff00-4e2a-b7fd-04f1ea6e7525] Running
	I1025 21:12:50.964065   19225 system_pods.go:61] "kube-apiserver-addons-276457" [65afa9c8-ca8e-4c44-a32f-1e309066d3ba] Running
	I1025 21:12:50.964092   19225 system_pods.go:61] "kube-controller-manager-addons-276457" [072d240d-befd-44a9-a611-03a71d6b942d] Running
	I1025 21:12:50.964106   19225 system_pods.go:61] "kube-ingress-dns-minikube" [b61b20cf-d8fa-4d4d-bcba-1a241bd163c5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 21:12:50.964115   19225 system_pods.go:61] "kube-proxy-lfxtf" [5788d071-c70e-423a-b0b3-f6a073dd9ac7] Running
	I1025 21:12:50.964123   19225 system_pods.go:61] "kube-scheduler-addons-276457" [47f839ab-49d7-48d7-956f-aa6420977e23] Running
	I1025 21:12:50.964128   19225 system_pods.go:61] "metrics-server-7c66d45ddc-npx6l" [2269dbab-85e9-49c1-a14c-dc3b4c9b6219] Running
	I1025 21:12:50.964134   19225 system_pods.go:61] "nvidia-device-plugin-daemonset-6vcl4" [a592e92f-1bee-4d45-b641-bcd64d215d00] Running
	I1025 21:12:50.964140   19225 system_pods.go:61] "registry-proxy-757b5" [0f632bc0-5dac-4262-9ef7-eefd90d3e1e0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 21:12:50.964148   19225 system_pods.go:61] "registry-wzfbd" [2736623a-ce10-4cd0-9c1b-72b47c11791c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 21:12:50.964159   19225 system_pods.go:61] "snapshot-controller-58dbcc7b99-8w96w" [1a277b0b-61fc-4fd0-a8a8-9c0b6cf9a142] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:12:50.964168   19225 system_pods.go:61] "snapshot-controller-58dbcc7b99-z65gj" [28b16b26-5e12-461c-98a4-399698e38c7a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:12:50.964174   19225 system_pods.go:61] "storage-provisioner" [828646cd-20b3-4a1c-a61e-3d317b516b4a] Running
	I1025 21:12:50.964182   19225 system_pods.go:61] "tiller-deploy-7b677967b9-n7rpr" [136d2d8d-36a3-4072-9f39-dc7708f0c429] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1025 21:12:50.964191   19225 system_pods.go:74] duration metric: took 8.174159ms to wait for pod list to return data ...
	I1025 21:12:50.964200   19225 default_sa.go:34] waiting for default service account to be created ...
	I1025 21:12:50.966207   19225 default_sa.go:45] found service account: "default"
	I1025 21:12:50.966227   19225 default_sa.go:55] duration metric: took 2.020586ms for default service account to be created ...
	I1025 21:12:50.966235   19225 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 21:12:50.973622   19225 system_pods.go:86] 19 kube-system pods found
	I1025 21:12:50.973648   19225 system_pods.go:89] "coredns-5dd5756b68-sf5h2" [751ca8b7-0f96-4283-985e-466a5465488b] Running
	I1025 21:12:50.973654   19225 system_pods.go:89] "csi-hostpath-attacher-0" [db70516b-fb4f-4675-809f-c13a75b3520b] Running
	I1025 21:12:50.973662   19225 system_pods.go:89] "csi-hostpath-resizer-0" [4209b996-75ca-4014-8e18-94ac7624feb4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 21:12:50.973672   19225 system_pods.go:89] "csi-hostpathplugin-lpvws" [dcd7bf3c-50b6-4316-af65-6502373843a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 21:12:50.973677   19225 system_pods.go:89] "etcd-addons-276457" [5b2496f0-24e4-4b6f-96a2-178f23708977] Running
	I1025 21:12:50.973681   19225 system_pods.go:89] "kindnet-gwvhf" [e43f73bf-ff00-4e2a-b7fd-04f1ea6e7525] Running
	I1025 21:12:50.973686   19225 system_pods.go:89] "kube-apiserver-addons-276457" [65afa9c8-ca8e-4c44-a32f-1e309066d3ba] Running
	I1025 21:12:50.973691   19225 system_pods.go:89] "kube-controller-manager-addons-276457" [072d240d-befd-44a9-a611-03a71d6b942d] Running
	I1025 21:12:50.973697   19225 system_pods.go:89] "kube-ingress-dns-minikube" [b61b20cf-d8fa-4d4d-bcba-1a241bd163c5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 21:12:50.973704   19225 system_pods.go:89] "kube-proxy-lfxtf" [5788d071-c70e-423a-b0b3-f6a073dd9ac7] Running
	I1025 21:12:50.973709   19225 system_pods.go:89] "kube-scheduler-addons-276457" [47f839ab-49d7-48d7-956f-aa6420977e23] Running
	I1025 21:12:50.973713   19225 system_pods.go:89] "metrics-server-7c66d45ddc-npx6l" [2269dbab-85e9-49c1-a14c-dc3b4c9b6219] Running
	I1025 21:12:50.973718   19225 system_pods.go:89] "nvidia-device-plugin-daemonset-6vcl4" [a592e92f-1bee-4d45-b641-bcd64d215d00] Running
	I1025 21:12:50.973723   19225 system_pods.go:89] "registry-proxy-757b5" [0f632bc0-5dac-4262-9ef7-eefd90d3e1e0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 21:12:50.973735   19225 system_pods.go:89] "registry-wzfbd" [2736623a-ce10-4cd0-9c1b-72b47c11791c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 21:12:50.973742   19225 system_pods.go:89] "snapshot-controller-58dbcc7b99-8w96w" [1a277b0b-61fc-4fd0-a8a8-9c0b6cf9a142] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:12:50.973749   19225 system_pods.go:89] "snapshot-controller-58dbcc7b99-z65gj" [28b16b26-5e12-461c-98a4-399698e38c7a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:12:50.973756   19225 system_pods.go:89] "storage-provisioner" [828646cd-20b3-4a1c-a61e-3d317b516b4a] Running
	I1025 21:12:50.973762   19225 system_pods.go:89] "tiller-deploy-7b677967b9-n7rpr" [136d2d8d-36a3-4072-9f39-dc7708f0c429] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1025 21:12:50.973770   19225 system_pods.go:126] duration metric: took 7.531067ms to wait for k8s-apps to be running ...
	I1025 21:12:50.973779   19225 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 21:12:50.973821   19225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:12:50.984625   19225 system_svc.go:56] duration metric: took 10.835646ms WaitForService to wait for kubelet.
	I1025 21:12:50.984651   19225 kubeadm.go:581] duration metric: took 50.518893947s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 21:12:50.984678   19225 node_conditions.go:102] verifying NodePressure condition ...
	I1025 21:12:50.987538   19225 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 21:12:50.987573   19225 node_conditions.go:123] node cpu capacity is 8
	I1025 21:12:50.987587   19225 node_conditions.go:105] duration metric: took 2.90305ms to run NodePressure ...
	I1025 21:12:50.987601   19225 start.go:228] waiting for startup goroutines ...
	I1025 21:12:51.134348   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:51.134420   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:51.250578   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:51.350825   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:51.641527   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:51.643600   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:51.827184   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:51.850074   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:52.135372   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:52.135598   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:52.250611   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:52.350219   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:52.634793   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:52.634880   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:52.750682   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:52.849285   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:53.135535   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:53.135761   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:53.251151   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:53.350970   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:53.633772   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:53.634095   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:53.750044   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:53.849575   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:54.134227   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:54.134259   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:54.250697   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:54.349954   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:54.634116   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:54.634800   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:54.750731   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:54.849226   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:55.134382   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:55.134821   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:55.250702   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:55.349610   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:55.635099   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:55.635892   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:55.752105   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:55.849910   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:56.135193   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:56.136578   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:56.249515   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:56.348805   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:56.636163   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:56.636171   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:56.750372   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:56.858577   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:57.133846   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:57.133901   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:57.250624   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:57.349825   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:57.634054   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:57.634268   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:57.750356   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:57.849785   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:58.134210   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:58.134312   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:58.249762   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:58.349390   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:58.634053   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:58.634638   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:58.749932   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:58.849164   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:59.134264   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:59.134714   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:59.250509   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:59.349040   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:12:59.633877   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:12:59.634052   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:12:59.750263   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:12:59.848109   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:00.136330   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:00.136400   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:13:00.250205   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:00.349037   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:00.634240   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:00.634940   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:13:00.750658   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:00.849402   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:01.134929   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:13:01.135128   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:01.250796   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:01.349810   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:01.634932   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:01.635428   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:13:01.750764   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:01.848618   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:02.134252   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:13:02.134462   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:02.249786   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:02.349987   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:02.633929   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:13:02.634188   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:02.750947   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:02.849460   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:03.134994   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:03.136581   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:13:03.250332   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:03.348732   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:03.633548   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:03.633583   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:13:03.750165   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:03.848952   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:04.134429   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:04.134460   19225 kapi.go:107] duration metric: took 57.578005009s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 21:13:04.250425   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:04.348711   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:04.634488   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:04.749992   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:04.849381   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:05.135146   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:05.250850   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:05.350203   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:05.636054   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:05.753082   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:05.850702   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:06.135559   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:06.251082   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:06.349660   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:06.635199   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:06.750332   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:06.849749   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:07.134385   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:07.250749   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:07.349663   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:07.634142   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:07.750682   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:07.849129   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:08.134030   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:08.250212   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:08.349372   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:08.634080   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:08.750203   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:08.850240   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:09.134241   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:09.251748   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:09.349109   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:09.633582   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:09.749839   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:09.849610   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:10.133671   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:10.317615   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:10.356598   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:10.635467   19225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:13:10.750853   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:10.849738   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:11.134697   19225 kapi.go:107] duration metric: took 1m4.575650465s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 21:13:11.249992   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:11.349681   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:11.749744   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:11.853457   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:12.250084   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:12.349725   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:12.750151   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:13:12.849190   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:13.250791   19225 kapi.go:107] duration metric: took 1m2.510904458s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 21:13:13.253165   19225 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-276457 cluster.
	I1025 21:13:13.254819   19225 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 21:13:13.256443   19225 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1025 21:13:13.349691   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:13.848395   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:14.349466   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:14.848436   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:15.349192   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:15.862237   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:16.353858   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:16.850253   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:17.349194   19225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:13:17.849011   19225 kapi.go:107] duration metric: took 1m10.511772782s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 21:13:17.851069   19225 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner-rancher, metrics-server, helm-tiller, storage-provisioner, nvidia-device-plugin, inspektor-gadget, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1025 21:13:17.852604   19225 addons.go:502] enable addons completed in 1m17.456879709s: enabled=[cloud-spanner ingress-dns storage-provisioner-rancher metrics-server helm-tiller storage-provisioner nvidia-device-plugin inspektor-gadget default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1025 21:13:17.852652   19225 start.go:233] waiting for cluster config update ...
	I1025 21:13:17.852669   19225 start.go:242] writing updated cluster config ...
	I1025 21:13:17.852907   19225 ssh_runner.go:195] Run: rm -f paused
	I1025 21:13:17.899399   19225 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1025 21:13:17.901088   19225 out.go:177] * Done! kubectl is now configured to use "addons-276457" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 25 21:13:37 addons-276457 crio[947]: time="2023-10-25 21:13:37.294872566Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,RepoTags:[docker.io/library/busybox:stable],RepoDigests:[docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79],Size_:4497096,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=75de86ef-d603-4920-b930-be4fe6e992ed name=/runtime.v1.ImageService/ImageStatus
	Oct 25 21:13:37 addons-276457 crio[947]: time="2023-10-25 21:13:37.295503165Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=1bf0d3c3-d330-465e-9a9a-e66276156997 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 21:13:37 addons-276457 crio[947]: time="2023-10-25 21:13:37.296273784Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,RepoTags:[docker.io/library/busybox:stable],RepoDigests:[docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79],Size_:4497096,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1bf0d3c3-d330-465e-9a9a-e66276156997 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 21:13:37 addons-276457 crio[947]: time="2023-10-25 21:13:37.297147921Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-b62d5b0e-4bb9-43b8-94d3-0062132da2ef/helper-pod" id=96dbb0bc-8106-4c07-a9fb-7bfb54620c1d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 21:13:37 addons-276457 crio[947]: time="2023-10-25 21:13:37.297235943Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 25 21:13:37 addons-276457 crio[947]: time="2023-10-25 21:13:37.327258201Z" level=info msg="Stopped pod sandbox: 67d0db41ff8ce076a650f1f6133dccc52d5be5622dbc61ba46d03dcf2de7a2be" id=98e7afaa-60fc-400e-b984-7a3f65f649f0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 21:13:37 addons-276457 crio[947]: time="2023-10-25 21:13:37.366641332Z" level=info msg="Created container d8768ebbf094ac3f285471fff40f59cd76fa14f278d43f90f4e0b2f95eee7116: local-path-storage/helper-pod-delete-pvc-b62d5b0e-4bb9-43b8-94d3-0062132da2ef/helper-pod" id=96dbb0bc-8106-4c07-a9fb-7bfb54620c1d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 21:13:37 addons-276457 crio[947]: time="2023-10-25 21:13:37.367119749Z" level=info msg="Starting container: d8768ebbf094ac3f285471fff40f59cd76fa14f278d43f90f4e0b2f95eee7116" id=d5a86576-8f09-4d3b-9364-6f2eb52cc77e name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 21:13:37 addons-276457 crio[947]: time="2023-10-25 21:13:37.375742812Z" level=info msg="Started container" PID=8270 containerID=d8768ebbf094ac3f285471fff40f59cd76fa14f278d43f90f4e0b2f95eee7116 description=local-path-storage/helper-pod-delete-pvc-b62d5b0e-4bb9-43b8-94d3-0062132da2ef/helper-pod id=d5a86576-8f09-4d3b-9364-6f2eb52cc77e name=/runtime.v1.RuntimeService/StartContainer sandboxID=f91629c28162590356bff8c0bac5978e25fa57f334537facec33e7fb8aeba55c
	Oct 25 21:13:37 addons-276457 crio[947]: time="2023-10-25 21:13:37.503513117Z" level=info msg="Stopping container: a0e943dd65d653f941ebac85d844f4763c97e94836406e641e257f0a9228e5c6 (timeout: 30s)" id=10768c52-b471-4fd3-95f3-7611d986b15d name=/runtime.v1.RuntimeService/StopContainer
	Oct 25 21:13:37 addons-276457 crio[947]: time="2023-10-25 21:13:37.947689046Z" level=info msg="Removing container: eea01fdc38abebac818f59a2078254620d56c9546fc3017d762e3377eaf2ad04" id=e55f0b1d-df9a-488f-a7e3-15aa9ac8d3fa name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 21:13:37 addons-276457 crio[947]: time="2023-10-25 21:13:37.965871483Z" level=info msg="Removed container eea01fdc38abebac818f59a2078254620d56c9546fc3017d762e3377eaf2ad04: kube-system/nvidia-device-plugin-daemonset-6vcl4/nvidia-device-plugin-ctr" id=e55f0b1d-df9a-488f-a7e3-15aa9ac8d3fa name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 21:13:38 addons-276457 crio[947]: time="2023-10-25 21:13:38.949698072Z" level=info msg="Stopping pod sandbox: f91629c28162590356bff8c0bac5978e25fa57f334537facec33e7fb8aeba55c" id=d14a1be0-65f2-40f1-b336-e42da4435fc1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 21:13:38 addons-276457 crio[947]: time="2023-10-25 21:13:38.949994310Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-b62d5b0e-4bb9-43b8-94d3-0062132da2ef Namespace:local-path-storage ID:f91629c28162590356bff8c0bac5978e25fa57f334537facec33e7fb8aeba55c UID:483c2b37-35f0-44e5-aa85-e7b748f0d4f0 NetNS:/var/run/netns/1559249f-bf15-4aab-8978-e391e3cf2c27 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 25 21:13:38 addons-276457 crio[947]: time="2023-10-25 21:13:38.950160648Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-b62d5b0e-4bb9-43b8-94d3-0062132da2ef from CNI network \"kindnet\" (type=ptp)"
	Oct 25 21:13:38 addons-276457 crio[947]: time="2023-10-25 21:13:38.972752011Z" level=info msg="Stopped pod sandbox: f91629c28162590356bff8c0bac5978e25fa57f334537facec33e7fb8aeba55c" id=d14a1be0-65f2-40f1-b336-e42da4435fc1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 21:13:42 addons-276457 crio[947]: time="2023-10-25 21:13:42.646329680Z" level=info msg="Stopping container: e33347aa0c17d61e30efd023e9fd21ae8fcfef8f72911434029e0d83b7160ff0 (timeout: 30s)" id=2f0cdf0b-c48c-4650-b1d2-3a4e0cd148e4 name=/runtime.v1.RuntimeService/StopContainer
	Oct 25 21:13:42 addons-276457 conmon[4001]: conmon e33347aa0c17d61e30ef <ninfo>: container 4013 exited with status 2
	Oct 25 21:13:42 addons-276457 crio[947]: time="2023-10-25 21:13:42.801035663Z" level=info msg="Stopped container e33347aa0c17d61e30efd023e9fd21ae8fcfef8f72911434029e0d83b7160ff0: default/cloud-spanner-emulator-56665cdfc-5h6wg/cloud-spanner-emulator" id=2f0cdf0b-c48c-4650-b1d2-3a4e0cd148e4 name=/runtime.v1.RuntimeService/StopContainer
	Oct 25 21:13:42 addons-276457 crio[947]: time="2023-10-25 21:13:42.801596997Z" level=info msg="Stopping pod sandbox: 1998017b387eda8c00e7bdff458f3f139df899d70fad1947bf3aeb8256309279" id=88776141-e937-4bbb-8cab-30e96242a5cd name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 21:13:42 addons-276457 crio[947]: time="2023-10-25 21:13:42.801799786Z" level=info msg="Got pod network &{Name:cloud-spanner-emulator-56665cdfc-5h6wg Namespace:default ID:1998017b387eda8c00e7bdff458f3f139df899d70fad1947bf3aeb8256309279 UID:9a49db78-b481-4deb-ab50-228a4e85728c NetNS:/var/run/netns/64708221-fb6a-435a-9cea-4dbae3f43483 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 25 21:13:42 addons-276457 crio[947]: time="2023-10-25 21:13:42.801923969Z" level=info msg="Deleting pod default_cloud-spanner-emulator-56665cdfc-5h6wg from CNI network \"kindnet\" (type=ptp)"
	Oct 25 21:13:42 addons-276457 crio[947]: time="2023-10-25 21:13:42.839568250Z" level=info msg="Stopped pod sandbox: 1998017b387eda8c00e7bdff458f3f139df899d70fad1947bf3aeb8256309279" id=88776141-e937-4bbb-8cab-30e96242a5cd name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 21:13:42 addons-276457 crio[947]: time="2023-10-25 21:13:42.962741991Z" level=info msg="Removing container: e33347aa0c17d61e30efd023e9fd21ae8fcfef8f72911434029e0d83b7160ff0" id=52a02984-c8fc-443e-ae7a-443a00d3a824 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 21:13:42 addons-276457 crio[947]: time="2023-10-25 21:13:42.978530756Z" level=info msg="Removed container e33347aa0c17d61e30efd023e9fd21ae8fcfef8f72911434029e0d83b7160ff0: default/cloud-spanner-emulator-56665cdfc-5h6wg/cloud-spanner-emulator" id=52a02984-c8fc-443e-ae7a-443a00d3a824 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	d8768ebbf094a       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                                             6 seconds ago        Exited              helper-pod                               0                   f91629c281625       helper-pod-delete-pvc-b62d5b0e-4bb9-43b8-94d3-0062132da2ef
	eae44a5ba921c       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            9 seconds ago        Exited              busybox                                  0                   4b509f342f3b4       test-local-path
	8876bc82bf0e0       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                                              9 seconds ago        Running             nginx                                    0                   1c2e0300f65af       nginx
	d426cb3a3e7e4       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            13 seconds ago       Exited              helper-pod                               0                   0fc5fc926c81f       helper-pod-create-pvc-b62d5b0e-4bb9-43b8-94d3-0062132da2ef
	5c07f2bb09f2c       docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                                19 seconds ago       Exited              helm-test                                0                   585ffc8c0b4e6       helm-test
	8aa314d9326c7       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          26 seconds ago       Running             csi-snapshotter                          0                   8aecbe69c55d2       csi-hostpathplugin-lpvws
	5c5a5cd75482b       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          27 seconds ago       Running             csi-provisioner                          0                   8aecbe69c55d2       csi-hostpathplugin-lpvws
	c85826666b363       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            29 seconds ago       Running             liveness-probe                           0                   8aecbe69c55d2       csi-hostpathplugin-lpvws
	95e9254a51e5a       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           30 seconds ago       Running             hostpath                                 0                   8aecbe69c55d2       csi-hostpathplugin-lpvws
	7cba49f7404b6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                                 31 seconds ago       Running             gcp-auth                                 0                   1c9817f50ebdf       gcp-auth-d4c87556c-5hmwp
	d61a09942e5d9       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                32 seconds ago       Running             node-driver-registrar                    0                   8aecbe69c55d2       csi-hostpathplugin-lpvws
	f802f74ec158f       registry.k8s.io/ingress-nginx/controller@sha256:2648554ee53ec65a6095e00a53c89efae60aa21086733cdf56ae05e8f8546788                             33 seconds ago       Running             controller                               0                   ea080af89b41b       ingress-nginx-controller-6f48fc54bd-gcj4s
	88022f9ec005f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   39 seconds ago       Running             csi-external-health-monitor-controller   0                   8aecbe69c55d2       csi-hostpathplugin-lpvws
	cae7a123866f8       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      40 seconds ago       Running             volume-snapshot-controller               0                   c25c4f6b059ff       snapshot-controller-58dbcc7b99-z65gj
	971b8372d8521       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                                             40 seconds ago       Exited              patch                                    2                   2e72454d44ae2       gcp-auth-certs-patch-pkt9q
	f3ba462392ca9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      44 seconds ago       Running             volume-snapshot-controller               0                   852e5f7752e86       snapshot-controller-58dbcc7b99-8w96w
	f20e415c99ea2       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                                             46 seconds ago       Exited              patch                                    2                   7f83cd33faf88       ingress-nginx-admission-patch-c27zh
	f4366c1b24f54       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385                   48 seconds ago       Exited              create                                   0                   8f6d2180a0d2b       gcp-auth-certs-create-qnrkh
	fc4d5bed77ab9       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             48 seconds ago       Running             minikube-ingress-dns                     0                   78bf8bae87e2d       kube-ingress-dns-minikube
	1e92b33ba5432       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              53 seconds ago       Running             csi-resizer                              0                   df59e0d4de6bd       csi-hostpath-resizer-0
	f44a7896bf359       registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21                        54 seconds ago       Running             metrics-server                           0                   c6c68615edeb8       metrics-server-7c66d45ddc-npx6l
	806b39c75be96       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             56 seconds ago       Running             csi-attacher                             0                   ceca2e7a43f23       csi-hostpath-attacher-0
	2452485fa2fc8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385                   57 seconds ago       Exited              create                                   0                   7d1fe410a46b0       ingress-nginx-admission-create-dp2tl
	a0e943dd65d65       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             57 seconds ago       Running             local-path-provisioner                   0                   75340299c3584       local-path-provisioner-78b46b4d5c-fcdjl
	75ff7ad1772cd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                                             About a minute ago   Running             coredns                                  0                   096b2a366d270       coredns-5dd5756b68-sf5h2
	992887bfba631       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   d90eb0b2bfef0       storage-provisioner
	ce84af5567968       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                                                             About a minute ago   Running             kube-proxy                               0                   087f8ba5f918a       kube-proxy-lfxtf
	5f7b376083c55       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                                             About a minute ago   Running             kindnet-cni                              0                   273acfe28a6d2       kindnet-gwvhf
	03b775e92e4fb       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                                                             2 minutes ago        Running             kube-scheduler                           0                   47de2bfcdfc83       kube-scheduler-addons-276457
	7b23088978803       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                                             2 minutes ago        Running             etcd                                     0                   7a0aa173b6609       etcd-addons-276457
	7bd55dfbf9f63       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                                                             2 minutes ago        Running             kube-apiserver                           0                   424fae2d2fca5       kube-apiserver-addons-276457
	375b113702a15       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                                                             2 minutes ago        Running             kube-controller-manager                  0                   fe8d7772b58ac       kube-controller-manager-addons-276457
	
	* 
	* ==> coredns [75ff7ad1772cd4cc641c27d81dc0ba3f5ae883af114601ed60edd0fe5e91f539] <==
	* [INFO] 10.244.0.16:35861 - 50476 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000110623s
	[INFO] 10.244.0.16:41819 - 14105 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005211734s
	[INFO] 10.244.0.16:41819 - 9734 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005304659s
	[INFO] 10.244.0.16:43957 - 59882 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003654702s
	[INFO] 10.244.0.16:43957 - 44015 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004749289s
	[INFO] 10.244.0.16:44540 - 61998 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005153616s
	[INFO] 10.244.0.16:44540 - 63019 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005239373s
	[INFO] 10.244.0.16:41996 - 36278 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000060207s
	[INFO] 10.244.0.16:41996 - 24754 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000098313s
	[INFO] 10.244.0.20:39156 - 35120 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000171297s
	[INFO] 10.244.0.20:40390 - 17221 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000150397s
	[INFO] 10.244.0.20:40135 - 52991 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144489s
	[INFO] 10.244.0.20:52295 - 8782 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155773s
	[INFO] 10.244.0.20:56044 - 57441 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096856s
	[INFO] 10.244.0.20:49017 - 23362 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117587s
	[INFO] 10.244.0.20:44022 - 8593 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006916256s
	[INFO] 10.244.0.20:48756 - 35047 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007296825s
	[INFO] 10.244.0.20:45314 - 57066 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006592822s
	[INFO] 10.244.0.20:45101 - 27190 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007298437s
	[INFO] 10.244.0.20:49742 - 1250 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00432022s
	[INFO] 10.244.0.20:56914 - 26333 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004911244s
	[INFO] 10.244.0.20:52891 - 56091 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000660891s
	[INFO] 10.244.0.20:56358 - 7017 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000656753s
	[INFO] 10.244.0.22:59087 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000109206s
	[INFO] 10.244.0.22:48971 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000070826s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-276457
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-276457
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=addons-276457
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T21_11_48_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-276457
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-276457"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Oct 2023 21:11:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-276457
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 25 Oct 2023 21:13:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Oct 2023 21:13:19 +0000   Wed, 25 Oct 2023 21:11:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Oct 2023 21:13:19 +0000   Wed, 25 Oct 2023 21:11:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Oct 2023 21:13:19 +0000   Wed, 25 Oct 2023 21:11:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Oct 2023 21:13:19 +0000   Wed, 25 Oct 2023 21:12:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-276457
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c2fc0dc393648b7a68521daff511eb3
	  System UUID:                0971f13f-3e61-4c7b-bfb8-1801c7f8cab3
	  Boot ID:                    34092eb3-c5c2-47c9-ae8c-38e7a764813a
	  Kernel Version:             5.15.0-1045-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  gcp-auth                    gcp-auth-d4c87556c-5hmwp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  ingress-nginx               ingress-nginx-controller-6f48fc54bd-gcj4s    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         98s
	  kube-system                 coredns-5dd5756b68-sf5h2                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 csi-hostpathplugin-lpvws                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 etcd-addons-276457                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-gwvhf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-addons-276457                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-addons-276457        200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-lfxtf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-addons-276457                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 metrics-server-7c66d45ddc-npx6l              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         99s
	  kube-system                 snapshot-controller-58dbcc7b99-8w96w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 snapshot-controller-58dbcc7b99-z65gj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  local-path-storage          local-path-provisioner-78b46b4d5c-fcdjl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 99s   kube-proxy       
	  Normal  Starting                 117s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s  kubelet          Node addons-276457 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s  kubelet          Node addons-276457 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s  kubelet          Node addons-276457 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           105s  node-controller  Node addons-276457 event: Registered Node addons-276457 in Controller
	  Normal  NodeReady                71s   kubelet          Node addons-276457 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001544]  #3
	[  +0.000000]  #4
	[  +0.003186] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003170] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001947] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002190]  #5
	[  +0.000692]  #6
	[  +0.000828]  #7
	[  +0.058348] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.622326] i8042: Warning: Keylock active
	[  +0.007686] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003094] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000681] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000644] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000681] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000619] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000652] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.894003] kauditd_printk_skb: 36 callbacks suppressed
	[Oct25 21:13] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a2 89 e0 fc dd 0b 4e 55 b8 5a de 54 08 00
	[  +1.015767] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 89 e0 fc dd 0b 4e 55 b8 5a de 54 08 00
	
	* 
	* ==> etcd [7b230889788035517ae74d3b3c3ee09dd49a3de32493db3e17276fbbc8f68a57] <==
	* {"level":"warn","ts":"2023-10-25T21:12:03.832116Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.897377ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-276457\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2023-10-25T21:12:03.832152Z","caller":"traceutil/trace.go:171","msg":"trace[1538852331] range","detail":"{range_begin:/registry/minions/addons-276457; range_end:; response_count:1; response_revision:388; }","duration":"103.944379ms","start":"2023-10-25T21:12:03.728199Z","end":"2023-10-25T21:12:03.832143Z","steps":["trace[1538852331] 'agreement among raft nodes before linearized reading'  (duration: 103.848859ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:12:03.83232Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.158376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-276457\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2023-10-25T21:12:03.832352Z","caller":"traceutil/trace.go:171","msg":"trace[1603167304] range","detail":"{range_begin:/registry/minions/addons-276457; range_end:; response_count:1; response_revision:388; }","duration":"104.193129ms","start":"2023-10-25T21:12:03.728152Z","end":"2023-10-25T21:12:03.832345Z","steps":["trace[1603167304] 'agreement among raft nodes before linearized reading'  (duration: 104.139351ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:12:03.832504Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.623935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-25T21:12:03.832537Z","caller":"traceutil/trace.go:171","msg":"trace[1069362955] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:388; }","duration":"299.65618ms","start":"2023-10-25T21:12:03.532871Z","end":"2023-10-25T21:12:03.832528Z","steps":["trace[1069362955] 'agreement among raft nodes before linearized reading'  (duration: 299.609163ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:12:03.832659Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.909526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2023-10-25T21:12:03.832688Z","caller":"traceutil/trace.go:171","msg":"trace[779930748] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:388; }","duration":"299.937779ms","start":"2023-10-25T21:12:03.532743Z","end":"2023-10-25T21:12:03.832681Z","steps":["trace[779930748] 'agreement among raft nodes before linearized reading'  (duration: 299.891423ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:12:03.832808Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.130005ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-25T21:12:03.832838Z","caller":"traceutil/trace.go:171","msg":"trace[1487520939] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:388; }","duration":"300.169876ms","start":"2023-10-25T21:12:03.532662Z","end":"2023-10-25T21:12:03.832832Z","steps":["trace[1487520939] 'agreement among raft nodes before linearized reading'  (duration: 300.116771ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:12:03.832897Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-25T21:12:03.532656Z","time spent":"300.232352ms","remote":"127.0.0.1:49518","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" "}
	{"level":"info","ts":"2023-10-25T21:12:04.038422Z","caller":"traceutil/trace.go:171","msg":"trace[2033627025] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"103.251871ms","start":"2023-10-25T21:12:03.935148Z","end":"2023-10-25T21:12:04.0384Z","steps":["trace[2033627025] 'process raft request'  (duration: 103.010221ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:12:04.132223Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.60448ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"warn","ts":"2023-10-25T21:12:04.137133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.095411ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/tiller-deploy\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-25T21:12:04.137205Z","caller":"traceutil/trace.go:171","msg":"trace[1774568463] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:0; response_revision:391; }","duration":"199.176653ms","start":"2023-10-25T21:12:03.938013Z","end":"2023-10-25T21:12:04.13719Z","steps":["trace[1774568463] 'agreement among raft nodes before linearized reading'  (duration: 100.832156ms)","trace[1774568463] 'range keys from in-memory index tree'  (duration: 98.244259ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-25T21:12:04.137589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.936032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-10-25T21:12:04.137642Z","caller":"traceutil/trace.go:171","msg":"trace[1635521950] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:391; }","duration":"199.993119ms","start":"2023-10-25T21:12:03.937639Z","end":"2023-10-25T21:12:04.137632Z","steps":["trace[1635521950] 'agreement among raft nodes before linearized reading'  (duration: 199.895616ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-25T21:12:04.13244Z","caller":"traceutil/trace.go:171","msg":"trace[1563126748] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:391; }","duration":"197.818422ms","start":"2023-10-25T21:12:03.934578Z","end":"2023-10-25T21:12:04.132396Z","steps":["trace[1563126748] 'agreement among raft nodes before linearized reading'  (duration: 104.151294ms)","trace[1563126748] 'range keys from in-memory index tree'  (duration: 93.368661ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-25T21:12:04.827927Z","caller":"traceutil/trace.go:171","msg":"trace[423696568] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"100.819564ms","start":"2023-10-25T21:12:04.727086Z","end":"2023-10-25T21:12:04.827906Z","steps":["trace[423696568] 'process raft request'  (duration: 100.73397ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-25T21:12:04.827999Z","caller":"traceutil/trace.go:171","msg":"trace[1786168989] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"101.021154ms","start":"2023-10-25T21:12:04.726957Z","end":"2023-10-25T21:12:04.827978Z","steps":["trace[1786168989] 'process raft request'  (duration: 99.820098ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-25T21:12:04.828161Z","caller":"traceutil/trace.go:171","msg":"trace[969602590] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"101.041772ms","start":"2023-10-25T21:12:04.727107Z","end":"2023-10-25T21:12:04.828149Z","steps":["trace[969602590] 'process raft request'  (duration: 100.740708ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-25T21:12:04.828206Z","caller":"traceutil/trace.go:171","msg":"trace[2091919188] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"101.168732ms","start":"2023-10-25T21:12:04.727027Z","end":"2023-10-25T21:12:04.828196Z","steps":["trace[2091919188] 'process raft request'  (duration: 100.753583ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:12:04.828339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.188233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-25T21:12:04.828378Z","caller":"traceutil/trace.go:171","msg":"trace[841340284] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:424; }","duration":"101.239217ms","start":"2023-10-25T21:12:04.72713Z","end":"2023-10-25T21:12:04.828369Z","steps":["trace[841340284] 'agreement among raft nodes before linearized reading'  (duration: 101.143963ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-25T21:13:15.982032Z","caller":"traceutil/trace.go:171","msg":"trace[494949069] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"122.196482ms","start":"2023-10-25T21:13:15.859815Z","end":"2023-10-25T21:13:15.982011Z","steps":["trace[494949069] 'process raft request'  (duration: 59.54658ms)","trace[494949069] 'compare'  (duration: 62.557567ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [7cba49f7404b6f22d6585bd4a1b8628a39648fc0075af251e5a53bbd5c197034] <==
	* 2023/10/25 21:13:12 GCP Auth Webhook started!
	2023/10/25 21:13:23 Ready to marshal response ...
	2023/10/25 21:13:23 Ready to write response ...
	2023/10/25 21:13:28 Ready to marshal response ...
	2023/10/25 21:13:28 Ready to write response ...
	2023/10/25 21:13:29 Ready to marshal response ...
	2023/10/25 21:13:29 Ready to write response ...
	2023/10/25 21:13:29 Ready to marshal response ...
	2023/10/25 21:13:29 Ready to write response ...
	2023/10/25 21:13:31 Ready to marshal response ...
	2023/10/25 21:13:31 Ready to write response ...
	2023/10/25 21:13:36 Ready to marshal response ...
	2023/10/25 21:13:36 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:13:44 up 56 min,  0 users,  load average: 1.69, 0.96, 0.38
	Linux addons-276457 5.15.0-1045-gcp #53~20.04.2-Ubuntu SMP Wed Oct 18 12:59:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [5f7b376083c552e6a3037924a190823a01c9f75128f40fda99fe66afc09b5fd5] <==
	* I1025 21:12:01.650852       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1025 21:12:01.650917       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1025 21:12:01.651043       1 main.go:116] setting mtu 1500 for CNI 
	I1025 21:12:01.651059       1 main.go:146] kindnetd IP family: "ipv4"
	I1025 21:12:01.651080       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1025 21:12:32.674527       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1025 21:12:32.681472       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:12:32.681502       1 main.go:227] handling current node
	I1025 21:12:42.694224       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:12:42.694249       1 main.go:227] handling current node
	I1025 21:12:52.706255       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:12:52.706301       1 main.go:227] handling current node
	I1025 21:13:02.727554       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:13:02.727582       1 main.go:227] handling current node
	I1025 21:13:12.737174       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:13:12.737201       1 main.go:227] handling current node
	I1025 21:13:22.746248       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:13:22.746269       1 main.go:227] handling current node
	I1025 21:13:32.750370       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:13:32.750392       1 main.go:227] handling current node
	I1025 21:13:42.761589       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:13:42.761612       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [7bd55dfbf9f638594bd4e3dc7a593f548a86bb9472bda78a0e2308cc6278c607] <==
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1025 21:12:10.542532       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.108.77.52"}
	W1025 21:12:33.038238       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.77.52:443: connect: connection refused
	E1025 21:12:33.038310       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.77.52:443: connect: connection refused
	W1025 21:12:33.038485       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.77.52:443: connect: connection refused
	E1025 21:12:33.038519       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.77.52:443: connect: connection refused
	W1025 21:12:33.064485       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.77.52:443: connect: connection refused
	E1025 21:12:33.064511       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.77.52:443: connect: connection refused
	I1025 21:12:44.345080       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1025 21:12:50.627170       1 handler_proxy.go:93] no RequestInfo found in the context
	E1025 21:12:50.627241       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1025 21:12:50.627564       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1025 21:12:50.627841       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.40.178:443/apis/metrics.k8s.io/v1beta1: Get "https://10.106.40.178:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.106.40.178:443: connect: connection refused
	E1025 21:12:50.629845       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.40.178:443/apis/metrics.k8s.io/v1beta1: Get "https://10.106.40.178:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.106.40.178:443: connect: connection refused
	I1025 21:12:50.665680       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1025 21:13:23.399540       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1025 21:13:23.405379       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1025 21:13:24.428564       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E1025 21:13:24.748047       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.21:51052: read: connection reset by peer
	W1025 21:13:29.446625       1 dispatcher.go:217] Failed calling webhook, failing closed validate.nginx.ingress.kubernetes.io: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.110.255.147:443: connect: connection refused
	W1025 21:13:30.207539       1 dispatcher.go:217] Failed calling webhook, failing closed validate.nginx.ingress.kubernetes.io: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.110.255.147:443: connect: connection refused
	I1025 21:13:31.634212       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1025 21:13:31.900764       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.38.118"}
	I1025 21:13:44.348584       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [375b113702a156fe1ccf54013e40938a8a1c8cc66b19265a04d47d2f0372677a] <==
	* I1025 21:13:28.036148       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1025 21:13:29.052685       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I1025 21:13:29.238254       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1025 21:13:29.238321       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1025 21:13:29.329145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6f48fc54bd" duration="62.702211ms"
	I1025 21:13:29.329317       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6f48fc54bd" duration="60.957µs"
	I1025 21:13:29.372718       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1025 21:13:29.372825       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1025 21:13:29.628747       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1025 21:13:29.628786       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 21:13:29.947344       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1025 21:13:29.947383       1 shared_informer.go:318] Caches are synced for garbage collector
	W1025 21:13:31.120885       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:13:31.120918       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1025 21:13:31.546375       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="14.399µs"
	I1025 21:13:31.685879       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="5.493581ms"
	I1025 21:13:31.685992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="68.994µs"
	I1025 21:13:33.526741       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I1025 21:13:35.006320       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1025 21:13:35.022071       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1025 21:13:37.496174       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="8.504µs"
	I1025 21:13:42.634133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-56665cdfc" duration="5.95µs"
	W1025 21:13:42.913295       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:13:42.913326       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1025 21:13:44.373063       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	* 
	* ==> kube-proxy [ce84af55679689b692ba4b8d7bb3dec0838ce24c452f4e2601e331cf53a83570] <==
	* I1025 21:12:03.628858       1 server_others.go:69] "Using iptables proxy"
	I1025 21:12:03.930680       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1025 21:12:04.332294       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 21:12:04.340788       1 server_others.go:152] "Using iptables Proxier"
	I1025 21:12:04.340902       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 21:12:04.340941       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 21:12:04.341003       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 21:12:04.341264       1 server.go:846] "Version info" version="v1.28.3"
	I1025 21:12:04.341575       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 21:12:04.342520       1 config.go:188] "Starting service config controller"
	I1025 21:12:04.441906       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 21:12:04.343041       1 config.go:97] "Starting endpoint slice config controller"
	I1025 21:12:04.343507       1 config.go:315] "Starting node config controller"
	I1025 21:12:04.442042       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 21:12:04.442059       1 shared_informer.go:318] Caches are synced for node config
	I1025 21:12:04.442066       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 21:12:04.442071       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 21:12:04.442076       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [03b775e92e4fbfcba74879e770c73eb07f1537c7937814e57a39084de34c1676] <==
	* E1025 21:11:44.542675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1025 21:11:44.542525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1025 21:11:44.542454       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 21:11:44.542795       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1025 21:11:44.542466       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 21:11:44.542866       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 21:11:44.542566       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 21:11:44.542989       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 21:11:44.542603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 21:11:44.543037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1025 21:11:44.543132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 21:11:44.543180       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 21:11:45.352724       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 21:11:45.352776       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1025 21:11:45.432530       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 21:11:45.432566       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 21:11:45.438905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 21:11:45.438953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 21:11:45.465298       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 21:11:45.465330       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1025 21:11:45.484432       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 21:11:45.484465       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1025 21:11:45.531879       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 21:11:45.531910       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1025 21:11:48.036797       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 25 21:13:37 addons-276457 kubelet[1562]: E1025 21:13:37.966657    1562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eea01fdc38abebac818f59a2078254620d56c9546fc3017d762e3377eaf2ad04\": container with ID starting with eea01fdc38abebac818f59a2078254620d56c9546fc3017d762e3377eaf2ad04 not found: ID does not exist" containerID="eea01fdc38abebac818f59a2078254620d56c9546fc3017d762e3377eaf2ad04"
	Oct 25 21:13:37 addons-276457 kubelet[1562]: I1025 21:13:37.966701    1562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eea01fdc38abebac818f59a2078254620d56c9546fc3017d762e3377eaf2ad04"} err="failed to get container status \"eea01fdc38abebac818f59a2078254620d56c9546fc3017d762e3377eaf2ad04\": rpc error: code = NotFound desc = could not find container \"eea01fdc38abebac818f59a2078254620d56c9546fc3017d762e3377eaf2ad04\": container with ID starting with eea01fdc38abebac818f59a2078254620d56c9546fc3017d762e3377eaf2ad04 not found: ID does not exist"
	Oct 25 21:13:39 addons-276457 kubelet[1562]: I1025 21:13:39.167801    1562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7s9r\" (UniqueName: \"kubernetes.io/projected/483c2b37-35f0-44e5-aa85-e7b748f0d4f0-kube-api-access-l7s9r\") pod \"483c2b37-35f0-44e5-aa85-e7b748f0d4f0\" (UID: \"483c2b37-35f0-44e5-aa85-e7b748f0d4f0\") "
	Oct 25 21:13:39 addons-276457 kubelet[1562]: I1025 21:13:39.167853    1562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/483c2b37-35f0-44e5-aa85-e7b748f0d4f0-gcp-creds\") pod \"483c2b37-35f0-44e5-aa85-e7b748f0d4f0\" (UID: \"483c2b37-35f0-44e5-aa85-e7b748f0d4f0\") "
	Oct 25 21:13:39 addons-276457 kubelet[1562]: I1025 21:13:39.167887    1562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/483c2b37-35f0-44e5-aa85-e7b748f0d4f0-script\") pod \"483c2b37-35f0-44e5-aa85-e7b748f0d4f0\" (UID: \"483c2b37-35f0-44e5-aa85-e7b748f0d4f0\") "
	Oct 25 21:13:39 addons-276457 kubelet[1562]: I1025 21:13:39.167909    1562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/483c2b37-35f0-44e5-aa85-e7b748f0d4f0-data\") pod \"483c2b37-35f0-44e5-aa85-e7b748f0d4f0\" (UID: \"483c2b37-35f0-44e5-aa85-e7b748f0d4f0\") "
	Oct 25 21:13:39 addons-276457 kubelet[1562]: I1025 21:13:39.167980    1562 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/483c2b37-35f0-44e5-aa85-e7b748f0d4f0-data" (OuterVolumeSpecName: "data") pod "483c2b37-35f0-44e5-aa85-e7b748f0d4f0" (UID: "483c2b37-35f0-44e5-aa85-e7b748f0d4f0"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 25 21:13:39 addons-276457 kubelet[1562]: I1025 21:13:39.167984    1562 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/483c2b37-35f0-44e5-aa85-e7b748f0d4f0-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "483c2b37-35f0-44e5-aa85-e7b748f0d4f0" (UID: "483c2b37-35f0-44e5-aa85-e7b748f0d4f0"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 25 21:13:39 addons-276457 kubelet[1562]: I1025 21:13:39.170471    1562 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/483c2b37-35f0-44e5-aa85-e7b748f0d4f0-script" (OuterVolumeSpecName: "script") pod "483c2b37-35f0-44e5-aa85-e7b748f0d4f0" (UID: "483c2b37-35f0-44e5-aa85-e7b748f0d4f0"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 25 21:13:39 addons-276457 kubelet[1562]: I1025 21:13:39.171868    1562 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483c2b37-35f0-44e5-aa85-e7b748f0d4f0-kube-api-access-l7s9r" (OuterVolumeSpecName: "kube-api-access-l7s9r") pod "483c2b37-35f0-44e5-aa85-e7b748f0d4f0" (UID: "483c2b37-35f0-44e5-aa85-e7b748f0d4f0"). InnerVolumeSpecName "kube-api-access-l7s9r". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 25 21:13:39 addons-276457 kubelet[1562]: I1025 21:13:39.268612    1562 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/483c2b37-35f0-44e5-aa85-e7b748f0d4f0-gcp-creds\") on node \"addons-276457\" DevicePath \"\""
	Oct 25 21:13:39 addons-276457 kubelet[1562]: I1025 21:13:39.268644    1562 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/483c2b37-35f0-44e5-aa85-e7b748f0d4f0-script\") on node \"addons-276457\" DevicePath \"\""
	Oct 25 21:13:39 addons-276457 kubelet[1562]: I1025 21:13:39.268656    1562 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/483c2b37-35f0-44e5-aa85-e7b748f0d4f0-data\") on node \"addons-276457\" DevicePath \"\""
	Oct 25 21:13:39 addons-276457 kubelet[1562]: I1025 21:13:39.268669    1562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-l7s9r\" (UniqueName: \"kubernetes.io/projected/483c2b37-35f0-44e5-aa85-e7b748f0d4f0-kube-api-access-l7s9r\") on node \"addons-276457\" DevicePath \"\""
	Oct 25 21:13:39 addons-276457 kubelet[1562]: I1025 21:13:39.346390    1562 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a592e92f-1bee-4d45-b641-bcd64d215d00" path="/var/lib/kubelet/pods/a592e92f-1bee-4d45-b641-bcd64d215d00/volumes"
	Oct 25 21:13:39 addons-276457 kubelet[1562]: I1025 21:13:39.952550    1562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91629c28162590356bff8c0bac5978e25fa57f334537facec33e7fb8aeba55c"
	Oct 25 21:13:42 addons-276457 kubelet[1562]: I1025 21:13:42.961765    1562 scope.go:117] "RemoveContainer" containerID="e33347aa0c17d61e30efd023e9fd21ae8fcfef8f72911434029e0d83b7160ff0"
	Oct 25 21:13:42 addons-276457 kubelet[1562]: I1025 21:13:42.978765    1562 scope.go:117] "RemoveContainer" containerID="e33347aa0c17d61e30efd023e9fd21ae8fcfef8f72911434029e0d83b7160ff0"
	Oct 25 21:13:42 addons-276457 kubelet[1562]: E1025 21:13:42.979175    1562 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e33347aa0c17d61e30efd023e9fd21ae8fcfef8f72911434029e0d83b7160ff0\": container with ID starting with e33347aa0c17d61e30efd023e9fd21ae8fcfef8f72911434029e0d83b7160ff0 not found: ID does not exist" containerID="e33347aa0c17d61e30efd023e9fd21ae8fcfef8f72911434029e0d83b7160ff0"
	Oct 25 21:13:42 addons-276457 kubelet[1562]: I1025 21:13:42.979216    1562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e33347aa0c17d61e30efd023e9fd21ae8fcfef8f72911434029e0d83b7160ff0"} err="failed to get container status \"e33347aa0c17d61e30efd023e9fd21ae8fcfef8f72911434029e0d83b7160ff0\": rpc error: code = NotFound desc = could not find container \"e33347aa0c17d61e30efd023e9fd21ae8fcfef8f72911434029e0d83b7160ff0\": container with ID starting with e33347aa0c17d61e30efd023e9fd21ae8fcfef8f72911434029e0d83b7160ff0 not found: ID does not exist"
	Oct 25 21:13:43 addons-276457 kubelet[1562]: I1025 21:13:43.028475    1562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr2v5\" (UniqueName: \"kubernetes.io/projected/9a49db78-b481-4deb-ab50-228a4e85728c-kube-api-access-nr2v5\") pod \"9a49db78-b481-4deb-ab50-228a4e85728c\" (UID: \"9a49db78-b481-4deb-ab50-228a4e85728c\") "
	Oct 25 21:13:43 addons-276457 kubelet[1562]: I1025 21:13:43.030420    1562 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a49db78-b481-4deb-ab50-228a4e85728c-kube-api-access-nr2v5" (OuterVolumeSpecName: "kube-api-access-nr2v5") pod "9a49db78-b481-4deb-ab50-228a4e85728c" (UID: "9a49db78-b481-4deb-ab50-228a4e85728c"). InnerVolumeSpecName "kube-api-access-nr2v5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 25 21:13:43 addons-276457 kubelet[1562]: I1025 21:13:43.129288    1562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nr2v5\" (UniqueName: \"kubernetes.io/projected/9a49db78-b481-4deb-ab50-228a4e85728c-kube-api-access-nr2v5\") on node \"addons-276457\" DevicePath \"\""
	Oct 25 21:13:43 addons-276457 kubelet[1562]: I1025 21:13:43.347335    1562 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="483c2b37-35f0-44e5-aa85-e7b748f0d4f0" path="/var/lib/kubelet/pods/483c2b37-35f0-44e5-aa85-e7b748f0d4f0/volumes"
	Oct 25 21:13:43 addons-276457 kubelet[1562]: I1025 21:13:43.347768    1562 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9a49db78-b481-4deb-ab50-228a4e85728c" path="/var/lib/kubelet/pods/9a49db78-b481-4deb-ab50-228a4e85728c/volumes"
	
	* 
	* ==> storage-provisioner [992887bfba63158b15b41b7b4c6c040773c88e0814e92b79b9186090a6a838b6] <==
	* I1025 21:12:34.111323       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 21:12:34.128290       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 21:12:34.128342       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 21:12:34.136801       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 21:12:34.136960       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-276457_050db566-dd91-4398-8055-640c4bf9f606!
	I1025 21:12:34.137967       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a304e69-571b-4a80-abf0-c4402a8dbfb2", APIVersion:"v1", ResourceVersion:"864", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-276457_050db566-dd91-4398-8055-640c4bf9f606 became leader
	I1025 21:12:34.237193       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-276457_050db566-dd91-4398-8055-640c4bf9f606!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-276457 -n addons-276457
helpers_test.go:261: (dbg) Run:  kubectl --context addons-276457 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-dp2tl ingress-nginx-admission-patch-c27zh
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-276457 describe pod ingress-nginx-admission-create-dp2tl ingress-nginx-admission-patch-c27zh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-276457 describe pod ingress-nginx-admission-create-dp2tl ingress-nginx-admission-patch-c27zh: exit status 1 (54.844772ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-dp2tl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-c27zh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-276457 describe pod ingress-nginx-admission-create-dp2tl ingress-nginx-admission-patch-c27zh: exit status 1
--- FAIL: TestAddons/parallel/Headlamp (2.75s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (175.79s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-620621 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-620621 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.620887257s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-620621 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-620621 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5c48d254-bdb0-4b8b-8240-d87f27ab7401] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5c48d254-bdb0-4b8b-8240-d87f27ab7401] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.007084349s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-620621 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1025 21:23:17.917830   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
E1025 21:23:45.602083   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-620621 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.087924499s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-620621 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-620621 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.007195642s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-620621 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-620621 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-620621 addons disable ingress --alsologtostderr -v=1: (7.404428745s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-620621
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-620621:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1a7ddd86d6c4a01989da477c73ab630aa0fec24eeb0e7dbdc438f064ae299440",
	        "Created": "2023-10-25T21:20:48.427213746Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 60153,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-25T21:20:48.716868823Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/1a7ddd86d6c4a01989da477c73ab630aa0fec24eeb0e7dbdc438f064ae299440/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1a7ddd86d6c4a01989da477c73ab630aa0fec24eeb0e7dbdc438f064ae299440/hostname",
	        "HostsPath": "/var/lib/docker/containers/1a7ddd86d6c4a01989da477c73ab630aa0fec24eeb0e7dbdc438f064ae299440/hosts",
	        "LogPath": "/var/lib/docker/containers/1a7ddd86d6c4a01989da477c73ab630aa0fec24eeb0e7dbdc438f064ae299440/1a7ddd86d6c4a01989da477c73ab630aa0fec24eeb0e7dbdc438f064ae299440-json.log",
	        "Name": "/ingress-addon-legacy-620621",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-620621:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-620621",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8ec4e008c85506f36d2a33add9e5adad9d1dcedc0bfa31e8a1c934e059480a80-init/diff:/var/lib/docker/overlay2/08f48c2099646ae35740a1c0f07609c9eefd4a79bbbda6d2c067385f70ad62be/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ec4e008c85506f36d2a33add9e5adad9d1dcedc0bfa31e8a1c934e059480a80/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ec4e008c85506f36d2a33add9e5adad9d1dcedc0bfa31e8a1c934e059480a80/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ec4e008c85506f36d2a33add9e5adad9d1dcedc0bfa31e8a1c934e059480a80/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-620621",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-620621/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-620621",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-620621",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-620621",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9610bf28021832a31ed1c27feae11314b05af26733eacf7d7ed96b5135fa128a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9610bf280218",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-620621": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1a7ddd86d6c4",
	                        "ingress-addon-legacy-620621"
	                    ],
	                    "NetworkID": "0b3f2736f618b21828e372007a9ec01c97de4d8eb6e1358608009757609e8a95",
	                    "EndpointID": "06a92037b5692cffa0ae4ea53eab9cfdb2babad21afaead864a800bd2e63b914",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-620621 -n ingress-addon-legacy-620621
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-620621 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-620621 logs -n 25: (1.057224836s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-947891 image ls                                                   | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC | 25 Oct 23 21:20 UTC |
	| image   | functional-947891 image rm                                                   | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC | 25 Oct 23 21:20 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-947891                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-947891 image save --daemon                                        | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC | 25 Oct 23 21:20 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-947891                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-947891 image ls                                                   | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC | 25 Oct 23 21:20 UTC |
	| image   | functional-947891 image load                                                 | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC | 25 Oct 23 21:20 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-947891                                                            | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC | 25 Oct 23 21:20 UTC |
	|         | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-947891 image ls                                                   | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC | 25 Oct 23 21:20 UTC |
	| ssh     | functional-947891 ssh pgrep                                                  | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC |                     |
	|         | buildkitd                                                                    |                             |         |         |                     |                     |
	| image   | functional-947891                                                            | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC |                     |
	|         | image ls --format short                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-947891 image save --daemon                                        | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC | 25 Oct 23 21:20 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-947891                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-947891                                                            | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC |                     |
	|         | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh     | functional-947891 ssh pgrep                                                  | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC |                     |
	|         | buildkitd                                                                    |                             |         |         |                     |                     |
	| image   | functional-947891                                                            | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC | 25 Oct 23 21:20 UTC |
	|         | image ls --format short                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-947891                                                            | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC | 25 Oct 23 21:20 UTC |
	|         | image ls --format json                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-947891                                                            | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC | 25 Oct 23 21:20 UTC |
	|         | image ls --format table                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-947891 image build -t                                             | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC | 25 Oct 23 21:20 UTC |
	|         | localhost/my-image:functional-947891                                         |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image   | functional-947891 image ls                                                   | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC | 25 Oct 23 21:20 UTC |
	| delete  | -p functional-947891                                                         | functional-947891           | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC | 25 Oct 23 21:20 UTC |
	| start   | -p ingress-addon-legacy-620621                                               | ingress-addon-legacy-620621 | jenkins | v1.31.2 | 25 Oct 23 21:20 UTC | 25 Oct 23 21:21 UTC |
	|         | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                     |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-620621                                                  | ingress-addon-legacy-620621 | jenkins | v1.31.2 | 25 Oct 23 21:21 UTC | 25 Oct 23 21:21 UTC |
	|         | addons enable ingress                                                        |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-620621                                                  | ingress-addon-legacy-620621 | jenkins | v1.31.2 | 25 Oct 23 21:21 UTC | 25 Oct 23 21:21 UTC |
	|         | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-620621                                                  | ingress-addon-legacy-620621 | jenkins | v1.31.2 | 25 Oct 23 21:22 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-620621 ip                                               | ingress-addon-legacy-620621 | jenkins | v1.31.2 | 25 Oct 23 21:24 UTC | 25 Oct 23 21:24 UTC |
	| addons  | ingress-addon-legacy-620621                                                  | ingress-addon-legacy-620621 | jenkins | v1.31.2 | 25 Oct 23 21:24 UTC | 25 Oct 23 21:24 UTC |
	|         | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-620621                                                  | ingress-addon-legacy-620621 | jenkins | v1.31.2 | 25 Oct 23 21:24 UTC | 25 Oct 23 21:24 UTC |
	|         | addons disable ingress                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 21:20:35
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 21:20:35.285048   59529 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:20:35.285192   59529 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:20:35.285200   59529 out.go:309] Setting ErrFile to fd 2...
	I1025 21:20:35.285204   59529 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:20:35.285381   59529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
	I1025 21:20:35.285934   59529 out.go:303] Setting JSON to false
	I1025 21:20:35.287053   59529 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3784,"bootTime":1698265051,"procs":405,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:20:35.287113   59529 start.go:138] virtualization: kvm guest
	I1025 21:20:35.289601   59529 out.go:177] * [ingress-addon-legacy-620621] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:20:35.291333   59529 notify.go:220] Checking for updates...
	I1025 21:20:35.292960   59529 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 21:20:35.294882   59529 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:20:35.296445   59529 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:20:35.297953   59529 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	I1025 21:20:35.299572   59529 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 21:20:35.301035   59529 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:20:35.302784   59529 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:20:35.324272   59529 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:20:35.324352   59529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:20:35.376952   59529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-10-25 21:20:35.367077312 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:20:35.377091   59529 docker.go:295] overlay module found
	I1025 21:20:35.380402   59529 out.go:177] * Using the docker driver based on user configuration
	I1025 21:20:35.382154   59529 start.go:298] selected driver: docker
	I1025 21:20:35.382173   59529 start.go:902] validating driver "docker" against <nil>
	I1025 21:20:35.382184   59529 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:20:35.382941   59529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:20:35.432523   59529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-10-25 21:20:35.424779458 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:20:35.432708   59529 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 21:20:35.432907   59529 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:20:35.434960   59529 out.go:177] * Using Docker driver with root privileges
	I1025 21:20:35.436650   59529 cni.go:84] Creating CNI manager for ""
	I1025 21:20:35.436677   59529 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 21:20:35.436691   59529 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 21:20:35.436708   59529 start_flags.go:323] config:
	{Name:ingress-addon-legacy-620621 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-620621 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:20:35.438553   59529 out.go:177] * Starting control plane node ingress-addon-legacy-620621 in cluster ingress-addon-legacy-620621
	I1025 21:20:35.440168   59529 cache.go:121] Beginning downloading kic base image for docker with crio
	I1025 21:20:35.441665   59529 out.go:177] * Pulling base image ...
	I1025 21:20:35.443054   59529 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1025 21:20:35.443102   59529 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 21:20:35.462140   59529 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 21:20:35.462162   59529 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 21:20:35.464305   59529 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1025 21:20:35.464325   59529 cache.go:56] Caching tarball of preloaded images
	I1025 21:20:35.464457   59529 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1025 21:20:35.466429   59529 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1025 21:20:35.468061   59529 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1025 21:20:35.496677   59529 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1025 21:20:40.189879   59529 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1025 21:20:40.189973   59529 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1025 21:20:41.189962   59529 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1025 21:20:41.190349   59529 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/config.json ...
	I1025 21:20:41.190384   59529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/config.json: {Name:mk28bb49d2bb24f31833e83cd3ff31306a510d5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:20:41.190588   59529 cache.go:194] Successfully downloaded all kic artifacts
	I1025 21:20:41.190621   59529 start.go:365] acquiring machines lock for ingress-addon-legacy-620621: {Name:mk7d9dcb8fe21d49b2fb134390db6655ac1f16d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:20:41.190673   59529 start.go:369] acquired machines lock for "ingress-addon-legacy-620621" in 40.943µs
	I1025 21:20:41.190699   59529 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-620621 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-620621 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 21:20:41.190824   59529 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:20:41.194345   59529 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 21:20:41.194590   59529 start.go:159] libmachine.API.Create for "ingress-addon-legacy-620621" (driver="docker")
	I1025 21:20:41.194632   59529 client.go:168] LocalClient.Create starting
	I1025 21:20:41.194705   59529 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem
	I1025 21:20:41.194742   59529 main.go:141] libmachine: Decoding PEM data...
	I1025 21:20:41.194765   59529 main.go:141] libmachine: Parsing certificate...
	I1025 21:20:41.194831   59529 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem
	I1025 21:20:41.194860   59529 main.go:141] libmachine: Decoding PEM data...
	I1025 21:20:41.194880   59529 main.go:141] libmachine: Parsing certificate...
	I1025 21:20:41.195199   59529 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-620621 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:20:41.210793   59529 cli_runner.go:211] docker network inspect ingress-addon-legacy-620621 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:20:41.210858   59529 network_create.go:281] running [docker network inspect ingress-addon-legacy-620621] to gather additional debugging logs...
	I1025 21:20:41.210884   59529 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-620621
	W1025 21:20:41.225599   59529 cli_runner.go:211] docker network inspect ingress-addon-legacy-620621 returned with exit code 1
	I1025 21:20:41.225630   59529 network_create.go:284] error running [docker network inspect ingress-addon-legacy-620621]: docker network inspect ingress-addon-legacy-620621: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-620621 not found
	I1025 21:20:41.225647   59529 network_create.go:286] output of [docker network inspect ingress-addon-legacy-620621]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-620621 not found
	
	** /stderr **
	I1025 21:20:41.225729   59529 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:20:41.241039   59529 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0006bed00}
	I1025 21:20:41.241081   59529 network_create.go:124] attempt to create docker network ingress-addon-legacy-620621 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:20:41.241124   59529 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-620621 ingress-addon-legacy-620621
	I1025 21:20:41.293556   59529 network_create.go:108] docker network ingress-addon-legacy-620621 192.168.49.0/24 created
	I1025 21:20:41.293592   59529 kic.go:118] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-620621" container
	I1025 21:20:41.293651   59529 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:20:41.308687   59529 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-620621 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-620621 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:20:41.325767   59529 oci.go:103] Successfully created a docker volume ingress-addon-legacy-620621
	I1025 21:20:41.325840   59529 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-620621-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-620621 --entrypoint /usr/bin/test -v ingress-addon-legacy-620621:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1025 21:20:43.089022   59529 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-620621-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-620621 --entrypoint /usr/bin/test -v ingress-addon-legacy-620621:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib: (1.763110723s)
	I1025 21:20:43.089057   59529 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-620621
	I1025 21:20:43.089073   59529 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1025 21:20:43.089096   59529 kic.go:191] Starting extracting preloaded images to volume ...
	I1025 21:20:43.089157   59529 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-620621:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 21:20:48.363565   59529 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-620621:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (5.274373218s)
	I1025 21:20:48.363601   59529 kic.go:200] duration metric: took 5.274503 seconds to extract preloaded images to volume
	W1025 21:20:48.363730   59529 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 21:20:48.363814   59529 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 21:20:48.412977   59529 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-620621 --name ingress-addon-legacy-620621 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-620621 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-620621 --network ingress-addon-legacy-620621 --ip 192.168.49.2 --volume ingress-addon-legacy-620621:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 21:20:48.725052   59529 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-620621 --format={{.State.Running}}
	I1025 21:20:48.742082   59529 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-620621 --format={{.State.Status}}
	I1025 21:20:48.759765   59529 cli_runner.go:164] Run: docker exec ingress-addon-legacy-620621 stat /var/lib/dpkg/alternatives/iptables
	I1025 21:20:48.799124   59529 oci.go:144] the created container "ingress-addon-legacy-620621" has a running status.
	I1025 21:20:48.799154   59529 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/ingress-addon-legacy-620621/id_rsa...
	I1025 21:20:48.904754   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/ingress-addon-legacy-620621/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1025 21:20:48.904813   59529 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17488-11542/.minikube/machines/ingress-addon-legacy-620621/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 21:20:48.923392   59529 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-620621 --format={{.State.Status}}
	I1025 21:20:48.938976   59529 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 21:20:48.938995   59529 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-620621 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 21:20:49.004114   59529 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-620621 --format={{.State.Status}}
	I1025 21:20:49.019280   59529 machine.go:88] provisioning docker machine ...
	I1025 21:20:49.019316   59529 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-620621"
	I1025 21:20:49.019376   59529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-620621
	I1025 21:20:49.036700   59529 main.go:141] libmachine: Using SSH client type: native
	I1025 21:20:49.037089   59529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1025 21:20:49.037111   59529 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-620621 && echo "ingress-addon-legacy-620621" | sudo tee /etc/hostname
	I1025 21:20:49.037752   59529 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48250->127.0.0.1:32787: read: connection reset by peer
	I1025 21:20:52.167507   59529 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-620621
	
	I1025 21:20:52.167581   59529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-620621
	I1025 21:20:52.183508   59529 main.go:141] libmachine: Using SSH client type: native
	I1025 21:20:52.183853   59529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1025 21:20:52.183873   59529 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-620621' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-620621/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-620621' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 21:20:52.301984   59529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 21:20:52.302010   59529 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17488-11542/.minikube CaCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17488-11542/.minikube}
	I1025 21:20:52.302036   59529 ubuntu.go:177] setting up certificates
	I1025 21:20:52.302046   59529 provision.go:83] configureAuth start
	I1025 21:20:52.302099   59529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-620621
	I1025 21:20:52.318783   59529 provision.go:138] copyHostCerts
	I1025 21:20:52.318819   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem
	I1025 21:20:52.318845   59529 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem, removing ...
	I1025 21:20:52.318856   59529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem
	I1025 21:20:52.318936   59529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem (1078 bytes)
	I1025 21:20:52.319017   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem
	I1025 21:20:52.319036   59529 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem, removing ...
	I1025 21:20:52.319044   59529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem
	I1025 21:20:52.319071   59529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem (1123 bytes)
	I1025 21:20:52.319124   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem
	I1025 21:20:52.319142   59529 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem, removing ...
	I1025 21:20:52.319149   59529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem
	I1025 21:20:52.319172   59529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem (1675 bytes)
	I1025 21:20:52.319222   59529 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-620621 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-620621]
	I1025 21:20:52.546524   59529 provision.go:172] copyRemoteCerts
	I1025 21:20:52.546584   59529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 21:20:52.546620   59529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-620621
	I1025 21:20:52.562590   59529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/ingress-addon-legacy-620621/id_rsa Username:docker}
	I1025 21:20:52.650095   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 21:20:52.650156   59529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 21:20:52.670187   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 21:20:52.670242   59529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1025 21:20:52.690450   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 21:20:52.690502   59529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 21:20:52.710242   59529 provision.go:86] duration metric: configureAuth took 408.183984ms
	I1025 21:20:52.710270   59529 ubuntu.go:193] setting minikube options for container-runtime
	I1025 21:20:52.710467   59529 config.go:182] Loaded profile config "ingress-addon-legacy-620621": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1025 21:20:52.710578   59529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-620621
	I1025 21:20:52.726698   59529 main.go:141] libmachine: Using SSH client type: native
	I1025 21:20:52.727032   59529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1025 21:20:52.727049   59529 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 21:20:52.946772   59529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 21:20:52.946822   59529 machine.go:91] provisioned docker machine in 3.927496734s
	I1025 21:20:52.946845   59529 client.go:171] LocalClient.Create took 11.752198158s
	I1025 21:20:52.946864   59529 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-620621" took 11.752271896s
	I1025 21:20:52.946872   59529 start.go:300] post-start starting for "ingress-addon-legacy-620621" (driver="docker")
	I1025 21:20:52.946889   59529 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 21:20:52.946957   59529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 21:20:52.947006   59529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-620621
	I1025 21:20:52.964215   59529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/ingress-addon-legacy-620621/id_rsa Username:docker}
	I1025 21:20:53.050925   59529 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 21:20:53.053678   59529 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 21:20:53.053739   59529 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 21:20:53.053753   59529 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 21:20:53.053765   59529 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 21:20:53.053780   59529 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-11542/.minikube/addons for local assets ...
	I1025 21:20:53.053838   59529 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-11542/.minikube/files for local assets ...
	I1025 21:20:53.053931   59529 filesync.go:149] local asset: /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem -> 183232.pem in /etc/ssl/certs
	I1025 21:20:53.053942   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem -> /etc/ssl/certs/183232.pem
	I1025 21:20:53.054042   59529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 21:20:53.061014   59529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem --> /etc/ssl/certs/183232.pem (1708 bytes)
	I1025 21:20:53.081130   59529 start.go:303] post-start completed in 134.246286ms
	I1025 21:20:53.081464   59529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-620621
	I1025 21:20:53.097556   59529 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/config.json ...
	I1025 21:20:53.097874   59529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:20:53.097931   59529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-620621
	I1025 21:20:53.112888   59529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/ingress-addon-legacy-620621/id_rsa Username:docker}
	I1025 21:20:53.194641   59529 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:20:53.198300   59529 start.go:128] duration metric: createHost completed in 12.007449127s
	I1025 21:20:53.198324   59529 start.go:83] releasing machines lock for "ingress-addon-legacy-620621", held for 12.007635824s
	I1025 21:20:53.198384   59529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-620621
	I1025 21:20:53.213198   59529 ssh_runner.go:195] Run: cat /version.json
	I1025 21:20:53.213246   59529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-620621
	I1025 21:20:53.213267   59529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 21:20:53.213326   59529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-620621
	I1025 21:20:53.229080   59529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/ingress-addon-legacy-620621/id_rsa Username:docker}
	I1025 21:20:53.230023   59529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/ingress-addon-legacy-620621/id_rsa Username:docker}
	I1025 21:20:53.398173   59529 ssh_runner.go:195] Run: systemctl --version
	I1025 21:20:53.402083   59529 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 21:20:53.536895   59529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 21:20:53.540882   59529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:20:53.558968   59529 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1025 21:20:53.559038   59529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:20:53.584678   59529 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1025 21:20:53.584702   59529 start.go:472] detecting cgroup driver to use...
	I1025 21:20:53.584737   59529 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 21:20:53.584781   59529 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 21:20:53.597798   59529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 21:20:53.608068   59529 docker.go:198] disabling cri-docker service (if available) ...
	I1025 21:20:53.608124   59529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 21:20:53.619880   59529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 21:20:53.631833   59529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 21:20:53.703941   59529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 21:20:53.780182   59529 docker.go:214] disabling docker service ...
	I1025 21:20:53.780233   59529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 21:20:53.796602   59529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 21:20:53.806372   59529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 21:20:53.879261   59529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 21:20:53.957868   59529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 21:20:53.967611   59529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 21:20:53.980892   59529 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1025 21:20:53.980954   59529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:20:53.989030   59529 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 21:20:53.989078   59529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:20:53.997063   59529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:20:54.005182   59529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:20:54.013783   59529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 21:20:54.021647   59529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 21:20:54.028834   59529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 21:20:54.035973   59529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 21:20:54.110116   59529 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 21:20:54.215727   59529 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 21:20:54.215795   59529 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 21:20:54.218973   59529 start.go:540] Will wait 60s for crictl version
	I1025 21:20:54.219027   59529 ssh_runner.go:195] Run: which crictl
	I1025 21:20:54.221895   59529 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 21:20:54.251424   59529 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1025 21:20:54.251502   59529 ssh_runner.go:195] Run: crio --version
	I1025 21:20:54.283452   59529 ssh_runner.go:195] Run: crio --version
	I1025 21:20:54.317253   59529 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1025 21:20:54.318661   59529 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-620621 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:20:54.335154   59529 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 21:20:54.338349   59529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 21:20:54.347794   59529 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1025 21:20:54.347843   59529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 21:20:54.390481   59529 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1025 21:20:54.390548   59529 ssh_runner.go:195] Run: which lz4
	I1025 21:20:54.393728   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1025 21:20:54.393806   59529 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 21:20:54.396806   59529 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 21:20:54.396828   59529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1025 21:20:55.305528   59529 crio.go:444] Took 0.911747 seconds to copy over tarball
	I1025 21:20:55.305592   59529 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 21:20:57.537004   59529 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.231383327s)
	I1025 21:20:57.537041   59529 crio.go:451] Took 2.231483 seconds to extract the tarball
	I1025 21:20:57.537050   59529 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 21:20:57.603380   59529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 21:20:57.632744   59529 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1025 21:20:57.632767   59529 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 21:20:57.632811   59529 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:20:57.632839   59529 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1025 21:20:57.632857   59529 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 21:20:57.632887   59529 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1025 21:20:57.632917   59529 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1025 21:20:57.633035   59529 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1025 21:20:57.633041   59529 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1025 21:20:57.633176   59529 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1025 21:20:57.633927   59529 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 21:20:57.633955   59529 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1025 21:20:57.633942   59529 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1025 21:20:57.633963   59529 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1025 21:20:57.633926   59529 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1025 21:20:57.633996   59529 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1025 21:20:57.633993   59529 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1025 21:20:57.633926   59529 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:20:57.802202   59529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 21:20:57.806054   59529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1025 21:20:57.806231   59529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1025 21:20:57.808942   59529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1025 21:20:57.825985   59529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1025 21:20:57.829218   59529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1025 21:20:57.851495   59529 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1025 21:20:57.851526   59529 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1025 21:20:57.851540   59529 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 21:20:57.851543   59529 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1025 21:20:57.851582   59529 ssh_runner.go:195] Run: which crictl
	I1025 21:20:57.851583   59529 ssh_runner.go:195] Run: which crictl
	I1025 21:20:57.860740   59529 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1025 21:20:57.860782   59529 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1025 21:20:57.860824   59529 ssh_runner.go:195] Run: which crictl
	I1025 21:20:57.861337   59529 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1025 21:20:57.861371   59529 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1025 21:20:57.861413   59529 ssh_runner.go:195] Run: which crictl
	I1025 21:20:57.872404   59529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1025 21:20:57.936847   59529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:20:57.938989   59529 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1025 21:20:57.939038   59529 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1025 21:20:57.939066   59529 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1025 21:20:57.939082   59529 ssh_runner.go:195] Run: which crictl
	I1025 21:20:57.939100   59529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1025 21:20:57.939112   59529 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1025 21:20:57.939143   59529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 21:20:57.939164   59529 ssh_runner.go:195] Run: which crictl
	I1025 21:20:57.939216   59529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1025 21:20:57.939232   59529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1025 21:20:58.035737   59529 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1025 21:20:58.035786   59529 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1025 21:20:58.035832   59529 ssh_runner.go:195] Run: which crictl
	I1025 21:20:58.153220   59529 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1025 21:20:58.153294   59529 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1025 21:20:58.153322   59529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1025 21:20:58.153384   59529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1025 21:20:58.153427   59529 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1025 21:20:58.153457   59529 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1025 21:20:58.153543   59529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1025 21:20:58.187268   59529 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1025 21:20:58.187335   59529 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1025 21:20:58.187677   59529 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1025 21:20:58.187717   59529 cache_images.go:92] LoadImages completed in 554.938704ms
	W1025 21:20:58.187789   59529 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
	I1025 21:20:58.187859   59529 ssh_runner.go:195] Run: crio config
	I1025 21:20:58.226157   59529 cni.go:84] Creating CNI manager for ""
	I1025 21:20:58.226178   59529 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 21:20:58.226197   59529 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 21:20:58.226216   59529 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-620621 NodeName:ingress-addon-legacy-620621 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1025 21:20:58.226369   59529 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-620621"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 21:20:58.226444   59529 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-620621 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-620621 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 21:20:58.226500   59529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1025 21:20:58.234080   59529 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 21:20:58.234124   59529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 21:20:58.241398   59529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1025 21:20:58.255923   59529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1025 21:20:58.270614   59529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1025 21:20:58.284822   59529 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 21:20:58.287741   59529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 21:20:58.296673   59529 certs.go:56] Setting up /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621 for IP: 192.168.49.2
	I1025 21:20:58.296696   59529 certs.go:190] acquiring lock for shared ca certs: {Name:mk35413dbabac2652d1fa66d4e17d237360108a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:20:58.296793   59529 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key
	I1025 21:20:58.296832   59529 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key
	I1025 21:20:58.296888   59529 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.key
	I1025 21:20:58.296905   59529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt with IP's: []
	I1025 21:20:58.391002   59529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt ...
	I1025 21:20:58.391030   59529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: {Name:mk61006690a80f1039aeb499c03221dda3384639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:20:58.391197   59529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.key ...
	I1025 21:20:58.391208   59529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.key: {Name:mkf73eb284490ce168c18d2a7146294d679c91ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:20:58.391282   59529 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/apiserver.key.dd3b5fb2
	I1025 21:20:58.391296   59529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 21:20:58.500893   59529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/apiserver.crt.dd3b5fb2 ...
	I1025 21:20:58.500921   59529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/apiserver.crt.dd3b5fb2: {Name:mkab187558203fe4725c8eb6ee82669c5b2f4e9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:20:58.501064   59529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/apiserver.key.dd3b5fb2 ...
	I1025 21:20:58.501074   59529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/apiserver.key.dd3b5fb2: {Name:mk2391b3b12a96360174b9655b82cbc2d08e4c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:20:58.501136   59529 certs.go:337] copying /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/apiserver.crt
	I1025 21:20:58.501210   59529 certs.go:341] copying /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/apiserver.key
	I1025 21:20:58.501263   59529 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/proxy-client.key
	I1025 21:20:58.501276   59529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/proxy-client.crt with IP's: []
	I1025 21:20:58.694955   59529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/proxy-client.crt ...
	I1025 21:20:58.694989   59529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/proxy-client.crt: {Name:mkff817a48554f8a0a1e028a572d34caae28b46c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:20:58.695132   59529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/proxy-client.key ...
	I1025 21:20:58.695147   59529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/proxy-client.key: {Name:mka4f4ca6886d61c4e821f0874c22b6e44b319ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:20:58.695215   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 21:20:58.695232   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 21:20:58.695242   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 21:20:58.695255   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 21:20:58.695264   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 21:20:58.695278   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 21:20:58.695291   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 21:20:58.695303   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 21:20:58.695356   59529 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/18323.pem (1338 bytes)
	W1025 21:20:58.695390   59529 certs.go:433] ignoring /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/18323_empty.pem, impossibly tiny 0 bytes
	I1025 21:20:58.695401   59529 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 21:20:58.695421   59529 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem (1078 bytes)
	I1025 21:20:58.695443   59529 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem (1123 bytes)
	I1025 21:20:58.695466   59529 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem (1675 bytes)
	I1025 21:20:58.695510   59529 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem (1708 bytes)
	I1025 21:20:58.695533   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/18323.pem -> /usr/share/ca-certificates/18323.pem
	I1025 21:20:58.695548   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem -> /usr/share/ca-certificates/183232.pem
	I1025 21:20:58.695560   59529 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:20:58.696095   59529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 21:20:58.717682   59529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 21:20:58.737061   59529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 21:20:58.756670   59529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 21:20:58.776522   59529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 21:20:58.796375   59529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 21:20:58.815612   59529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 21:20:58.834855   59529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 21:20:58.854144   59529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/certs/18323.pem --> /usr/share/ca-certificates/18323.pem (1338 bytes)
	I1025 21:20:58.873775   59529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem --> /usr/share/ca-certificates/183232.pem (1708 bytes)
	I1025 21:20:58.893283   59529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 21:20:58.912671   59529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 21:20:58.927829   59529 ssh_runner.go:195] Run: openssl version
	I1025 21:20:58.932709   59529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18323.pem && ln -fs /usr/share/ca-certificates/18323.pem /etc/ssl/certs/18323.pem"
	I1025 21:20:58.940530   59529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18323.pem
	I1025 21:20:58.943459   59529 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:17 /usr/share/ca-certificates/18323.pem
	I1025 21:20:58.943505   59529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18323.pem
	I1025 21:20:58.949458   59529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18323.pem /etc/ssl/certs/51391683.0"
	I1025 21:20:58.957136   59529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183232.pem && ln -fs /usr/share/ca-certificates/183232.pem /etc/ssl/certs/183232.pem"
	I1025 21:20:58.965131   59529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183232.pem
	I1025 21:20:58.968074   59529 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:17 /usr/share/ca-certificates/183232.pem
	I1025 21:20:58.968131   59529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183232.pem
	I1025 21:20:58.974504   59529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183232.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 21:20:58.982221   59529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 21:20:58.989892   59529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:20:58.992737   59529 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:20:58.992789   59529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:20:58.998597   59529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 21:20:59.006386   59529 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 21:20:59.009038   59529 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 21:20:59.009085   59529 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-620621 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-620621 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:20:59.009161   59529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 21:20:59.009207   59529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 21:20:59.041808   59529 cri.go:89] found id: ""
	I1025 21:20:59.041864   59529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 21:20:59.049666   59529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 21:20:59.057846   59529 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 21:20:59.057908   59529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 21:20:59.065076   59529 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 21:20:59.065112   59529 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 21:20:59.106751   59529 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1025 21:20:59.106834   59529 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 21:20:59.142910   59529 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1025 21:20:59.142982   59529 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-gcp
	I1025 21:20:59.143020   59529 kubeadm.go:322] OS: Linux
	I1025 21:20:59.143098   59529 kubeadm.go:322] CGROUPS_CPU: enabled
	I1025 21:20:59.143172   59529 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1025 21:20:59.143239   59529 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1025 21:20:59.143313   59529 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1025 21:20:59.143379   59529 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1025 21:20:59.143454   59529 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1025 21:20:59.208005   59529 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 21:20:59.208138   59529 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 21:20:59.208253   59529 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 21:20:59.380459   59529 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 21:20:59.381245   59529 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 21:20:59.381361   59529 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 21:20:59.449923   59529 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 21:20:59.453772   59529 out.go:204]   - Generating certificates and keys ...
	I1025 21:20:59.453941   59529 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 21:20:59.454051   59529 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 21:20:59.555043   59529 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 21:20:59.615546   59529 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1025 21:20:59.679809   59529 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1025 21:20:59.809116   59529 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1025 21:21:00.029869   59529 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1025 21:21:00.030043   59529 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-620621 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 21:21:00.228786   59529 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1025 21:21:00.228963   59529 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-620621 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 21:21:00.790933   59529 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 21:21:00.957539   59529 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 21:21:01.068466   59529 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1025 21:21:01.068536   59529 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 21:21:01.240437   59529 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 21:21:01.503171   59529 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 21:21:01.840304   59529 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 21:21:01.932142   59529 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 21:21:01.933323   59529 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 21:21:01.935553   59529 out.go:204]   - Booting up control plane ...
	I1025 21:21:01.935644   59529 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 21:21:01.938885   59529 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 21:21:01.939812   59529 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 21:21:01.940458   59529 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 21:21:01.942823   59529 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 21:21:08.945452   59529 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002510 seconds
	I1025 21:21:08.945558   59529 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 21:21:08.955527   59529 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 21:21:09.469945   59529 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 21:21:09.470166   59529 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-620621 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1025 21:21:09.977746   59529 kubeadm.go:322] [bootstrap-token] Using token: gjlej7.ohokjj9k489yr2m2
	I1025 21:21:09.979259   59529 out.go:204]   - Configuring RBAC rules ...
	I1025 21:21:09.979404   59529 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 21:21:09.982350   59529 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 21:21:09.988083   59529 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 21:21:09.989739   59529 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 21:21:09.991502   59529 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 21:21:09.993180   59529 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 21:21:09.999778   59529 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 21:21:10.234120   59529 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1025 21:21:10.391946   59529 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1025 21:21:10.394094   59529 kubeadm.go:322] 
	I1025 21:21:10.394176   59529 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1025 21:21:10.394187   59529 kubeadm.go:322] 
	I1025 21:21:10.394272   59529 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1025 21:21:10.394312   59529 kubeadm.go:322] 
	I1025 21:21:10.394342   59529 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1025 21:21:10.394387   59529 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 21:21:10.394429   59529 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 21:21:10.394435   59529 kubeadm.go:322] 
	I1025 21:21:10.394474   59529 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1025 21:21:10.394534   59529 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 21:21:10.394587   59529 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 21:21:10.394597   59529 kubeadm.go:322] 
	I1025 21:21:10.394663   59529 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 21:21:10.394729   59529 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1025 21:21:10.394735   59529 kubeadm.go:322] 
	I1025 21:21:10.394799   59529 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gjlej7.ohokjj9k489yr2m2 \
	I1025 21:21:10.394941   59529 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:81aa62e087573fa9098e2a57ea7cc4407ea343d82712bf34cdaff83258d6f892 \
	I1025 21:21:10.394964   59529 kubeadm.go:322]     --control-plane 
	I1025 21:21:10.394970   59529 kubeadm.go:322] 
	I1025 21:21:10.395043   59529 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1025 21:21:10.395049   59529 kubeadm.go:322] 
	I1025 21:21:10.395119   59529 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gjlej7.ohokjj9k489yr2m2 \
	I1025 21:21:10.395232   59529 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:81aa62e087573fa9098e2a57ea7cc4407ea343d82712bf34cdaff83258d6f892 
	I1025 21:21:10.396959   59529 kubeadm.go:322] W1025 21:20:59.106223    1380 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1025 21:21:10.397226   59529 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1025 21:21:10.397339   59529 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 21:21:10.397457   59529 kubeadm.go:322] W1025 21:21:01.938658    1380 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 21:21:10.397572   59529 kubeadm.go:322] W1025 21:21:01.939647    1380 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 21:21:10.397612   59529 cni.go:84] Creating CNI manager for ""
	I1025 21:21:10.397627   59529 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 21:21:10.399470   59529 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1025 21:21:10.401131   59529 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 21:21:10.404627   59529 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1025 21:21:10.404646   59529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1025 21:21:10.420060   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 21:21:10.853261   59529 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 21:21:10.853346   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:10.853348   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc minikube.k8s.io/name=ingress-addon-legacy-620621 minikube.k8s.io/updated_at=2023_10_25T21_21_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:10.970402   59529 ops.go:34] apiserver oom_adj: -16
	I1025 21:21:10.970547   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:11.055022   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:11.633354   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:12.132993   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:12.633658   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:13.134023   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:13.633051   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:14.132937   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:14.633909   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:15.133671   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:15.633093   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:16.133969   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:16.633174   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:17.133107   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:17.633260   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:18.133007   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:18.633617   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:19.133114   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:19.633193   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:20.133228   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:20.633007   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:21.133192   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:21.633413   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:22.133673   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:22.633840   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:23.133940   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:23.633095   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:24.133263   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:24.633953   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:25.133659   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:25.633600   59529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:21:25.746369   59529 kubeadm.go:1081] duration metric: took 14.893097294s to wait for elevateKubeSystemPrivileges.
	I1025 21:21:25.746407   59529 kubeadm.go:406] StartCluster complete in 26.73732412s
	I1025 21:21:25.746464   59529 settings.go:142] acquiring lock: {Name:mkdc9277e8465489704340df47f71845c1a0d579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:21:25.746551   59529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:21:25.747268   59529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/kubeconfig: {Name:mk64fd87b209032b3c81ef85df6a4de19f21a5bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:21:25.747491   59529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 21:21:25.747588   59529 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1025 21:21:25.747666   59529 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-620621"
	I1025 21:21:25.747689   59529 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-620621"
	I1025 21:21:25.747700   59529 config.go:182] Loaded profile config "ingress-addon-legacy-620621": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1025 21:21:25.747722   59529 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-620621"
	I1025 21:21:25.747694   59529 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-620621"
	I1025 21:21:25.747808   59529 host.go:66] Checking if "ingress-addon-legacy-620621" exists ...
	I1025 21:21:25.748135   59529 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-620621 --format={{.State.Status}}
	I1025 21:21:25.748176   59529 kapi.go:59] client config for ingress-addon-legacy-620621: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt", KeyFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.key", CAFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 21:21:25.748324   59529 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-620621 --format={{.State.Status}}
	I1025 21:21:25.748990   59529 cert_rotation.go:137] Starting client certificate rotation controller
	I1025 21:21:25.766774   59529 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-620621" context rescaled to 1 replicas
	I1025 21:21:25.766838   59529 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 21:21:25.768657   59529 out.go:177] * Verifying Kubernetes components...
	I1025 21:21:25.770318   59529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:21:25.771969   59529 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:21:25.773477   59529 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 21:21:25.773499   59529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 21:21:25.773551   59529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-620621
	I1025 21:21:25.775706   59529 kapi.go:59] client config for ingress-addon-legacy-620621: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt", KeyFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.key", CAFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 21:21:25.776042   59529 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-620621"
	I1025 21:21:25.776078   59529 host.go:66] Checking if "ingress-addon-legacy-620621" exists ...
	I1025 21:21:25.776616   59529 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-620621 --format={{.State.Status}}
	I1025 21:21:25.793552   59529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/ingress-addon-legacy-620621/id_rsa Username:docker}
	I1025 21:21:25.795145   59529 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 21:21:25.795170   59529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 21:21:25.795218   59529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-620621
	I1025 21:21:25.809596   59529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/ingress-addon-legacy-620621/id_rsa Username:docker}
	I1025 21:21:25.876449   59529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 21:21:25.876924   59529 kapi.go:59] client config for ingress-addon-legacy-620621: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt", KeyFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.key", CAFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 21:21:25.877251   59529 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-620621" to be "Ready" ...
	I1025 21:21:25.944534   59529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 21:21:25.946652   59529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 21:21:26.431647   59529 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1025 21:21:26.541802   59529 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1025 21:21:26.543309   59529 addons.go:502] enable addons completed in 795.719195ms: enabled=[storage-provisioner default-storageclass]
	I1025 21:21:27.884829   59529 node_ready.go:58] node "ingress-addon-legacy-620621" has status "Ready":"False"
	I1025 21:21:29.884890   59529 node_ready.go:58] node "ingress-addon-legacy-620621" has status "Ready":"False"
	I1025 21:21:30.931264   59529 node_ready.go:49] node "ingress-addon-legacy-620621" has status "Ready":"True"
	I1025 21:21:30.931294   59529 node_ready.go:38] duration metric: took 5.05402337s waiting for node "ingress-addon-legacy-620621" to be "Ready" ...
	I1025 21:21:30.931315   59529 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 21:21:31.038659   59529 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-7wl8q" in "kube-system" namespace to be "Ready" ...
	I1025 21:21:33.284887   59529 pod_ready.go:102] pod "coredns-66bff467f8-7wl8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-25 21:21:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1025 21:21:35.285570   59529 pod_ready.go:102] pod "coredns-66bff467f8-7wl8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-25 21:21:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1025 21:21:37.287877   59529 pod_ready.go:102] pod "coredns-66bff467f8-7wl8q" in "kube-system" namespace has status "Ready":"False"
	I1025 21:21:37.787496   59529 pod_ready.go:92] pod "coredns-66bff467f8-7wl8q" in "kube-system" namespace has status "Ready":"True"
	I1025 21:21:37.787518   59529 pod_ready.go:81] duration metric: took 6.748834801s waiting for pod "coredns-66bff467f8-7wl8q" in "kube-system" namespace to be "Ready" ...
	I1025 21:21:37.787527   59529 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-620621" in "kube-system" namespace to be "Ready" ...
	I1025 21:21:37.791195   59529 pod_ready.go:92] pod "etcd-ingress-addon-legacy-620621" in "kube-system" namespace has status "Ready":"True"
	I1025 21:21:37.791214   59529 pod_ready.go:81] duration metric: took 3.681055ms waiting for pod "etcd-ingress-addon-legacy-620621" in "kube-system" namespace to be "Ready" ...
	I1025 21:21:37.791226   59529 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-620621" in "kube-system" namespace to be "Ready" ...
	I1025 21:21:37.794726   59529 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-620621" in "kube-system" namespace has status "Ready":"True"
	I1025 21:21:37.794741   59529 pod_ready.go:81] duration metric: took 3.509908ms waiting for pod "kube-apiserver-ingress-addon-legacy-620621" in "kube-system" namespace to be "Ready" ...
	I1025 21:21:37.794748   59529 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-620621" in "kube-system" namespace to be "Ready" ...
	I1025 21:21:37.818936   59529 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-620621" in "kube-system" namespace has status "Ready":"True"
	I1025 21:21:37.818958   59529 pod_ready.go:81] duration metric: took 24.203748ms waiting for pod "kube-controller-manager-ingress-addon-legacy-620621" in "kube-system" namespace to be "Ready" ...
	I1025 21:21:37.818970   59529 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d7mkv" in "kube-system" namespace to be "Ready" ...
	I1025 21:21:37.822681   59529 pod_ready.go:92] pod "kube-proxy-d7mkv" in "kube-system" namespace has status "Ready":"True"
	I1025 21:21:37.822701   59529 pod_ready.go:81] duration metric: took 3.724355ms waiting for pod "kube-proxy-d7mkv" in "kube-system" namespace to be "Ready" ...
	I1025 21:21:37.822711   59529 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-620621" in "kube-system" namespace to be "Ready" ...
	I1025 21:21:37.983061   59529 request.go:629] Waited for 160.254938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-620621
	I1025 21:21:38.183820   59529 request.go:629] Waited for 198.35975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-620621
	I1025 21:21:38.186317   59529 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-620621" in "kube-system" namespace has status "Ready":"True"
	I1025 21:21:38.186353   59529 pod_ready.go:81] duration metric: took 363.634239ms waiting for pod "kube-scheduler-ingress-addon-legacy-620621" in "kube-system" namespace to be "Ready" ...
	I1025 21:21:38.186367   59529 pod_ready.go:38] duration metric: took 7.255038158s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 21:21:38.186388   59529 api_server.go:52] waiting for apiserver process to appear ...
	I1025 21:21:38.186449   59529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:21:38.197017   59529 api_server.go:72] duration metric: took 12.430146708s to wait for apiserver process to appear ...
	I1025 21:21:38.197036   59529 api_server.go:88] waiting for apiserver healthz status ...
	I1025 21:21:38.197049   59529 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 21:21:38.201413   59529 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 21:21:38.202144   59529 api_server.go:141] control plane version: v1.18.20
	I1025 21:21:38.202165   59529 api_server.go:131] duration metric: took 5.124593ms to wait for apiserver health ...
	I1025 21:21:38.202173   59529 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 21:21:38.383485   59529 request.go:629] Waited for 181.195624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1025 21:21:38.388480   59529 system_pods.go:59] 8 kube-system pods found
	I1025 21:21:38.388508   59529 system_pods.go:61] "coredns-66bff467f8-7wl8q" [ee713c55-a7fd-4df1-8114-bd3ef9252649] Running
	I1025 21:21:38.388513   59529 system_pods.go:61] "etcd-ingress-addon-legacy-620621" [21798cb6-7d05-4bf2-b6b4-eb3b004c4945] Running
	I1025 21:21:38.388517   59529 system_pods.go:61] "kindnet-hz6fr" [83f78e48-8397-4833-919e-215de608326c] Running
	I1025 21:21:38.388524   59529 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-620621" [2711b407-fb1c-4d42-8687-ca2b06fb3fd9] Running
	I1025 21:21:38.388531   59529 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-620621" [655fd32b-27fa-4a89-b7c9-eb2a14043e3f] Running
	I1025 21:21:38.388540   59529 system_pods.go:61] "kube-proxy-d7mkv" [933456b0-23da-4656-94e3-1183f4ff1e6c] Running
	I1025 21:21:38.388550   59529 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-620621" [b3661fe0-e422-4f67-881b-7fccda0a2a9c] Running
	I1025 21:21:38.388556   59529 system_pods.go:61] "storage-provisioner" [61858455-f8e6-4a20-81fa-cbb63df405c1] Running
	I1025 21:21:38.388563   59529 system_pods.go:74] duration metric: took 186.385505ms to wait for pod list to return data ...
	I1025 21:21:38.388570   59529 default_sa.go:34] waiting for default service account to be created ...
	I1025 21:21:38.582904   59529 request.go:629] Waited for 194.273198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1025 21:21:38.585275   59529 default_sa.go:45] found service account: "default"
	I1025 21:21:38.585301   59529 default_sa.go:55] duration metric: took 196.722838ms for default service account to be created ...
	I1025 21:21:38.585311   59529 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 21:21:38.783742   59529 request.go:629] Waited for 198.35047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1025 21:21:38.788308   59529 system_pods.go:86] 8 kube-system pods found
	I1025 21:21:38.788340   59529 system_pods.go:89] "coredns-66bff467f8-7wl8q" [ee713c55-a7fd-4df1-8114-bd3ef9252649] Running
	I1025 21:21:38.788348   59529 system_pods.go:89] "etcd-ingress-addon-legacy-620621" [21798cb6-7d05-4bf2-b6b4-eb3b004c4945] Running
	I1025 21:21:38.788354   59529 system_pods.go:89] "kindnet-hz6fr" [83f78e48-8397-4833-919e-215de608326c] Running
	I1025 21:21:38.788360   59529 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-620621" [2711b407-fb1c-4d42-8687-ca2b06fb3fd9] Running
	I1025 21:21:38.788367   59529 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-620621" [655fd32b-27fa-4a89-b7c9-eb2a14043e3f] Running
	I1025 21:21:38.788372   59529 system_pods.go:89] "kube-proxy-d7mkv" [933456b0-23da-4656-94e3-1183f4ff1e6c] Running
	I1025 21:21:38.788379   59529 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-620621" [b3661fe0-e422-4f67-881b-7fccda0a2a9c] Running
	I1025 21:21:38.788386   59529 system_pods.go:89] "storage-provisioner" [61858455-f8e6-4a20-81fa-cbb63df405c1] Running
	I1025 21:21:38.788397   59529 system_pods.go:126] duration metric: took 203.079448ms to wait for k8s-apps to be running ...
	I1025 21:21:38.788418   59529 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 21:21:38.788464   59529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:21:38.798871   59529 system_svc.go:56] duration metric: took 10.452376ms WaitForService to wait for kubelet.
	I1025 21:21:38.798894   59529 kubeadm.go:581] duration metric: took 13.032025849s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 21:21:38.798913   59529 node_conditions.go:102] verifying NodePressure condition ...
	I1025 21:21:38.983349   59529 request.go:629] Waited for 184.372901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1025 21:21:38.985927   59529 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 21:21:38.985950   59529 node_conditions.go:123] node cpu capacity is 8
	I1025 21:21:38.985961   59529 node_conditions.go:105] duration metric: took 187.042761ms to run NodePressure ...
	I1025 21:21:38.985971   59529 start.go:228] waiting for startup goroutines ...
	I1025 21:21:38.985984   59529 start.go:233] waiting for cluster config update ...
	I1025 21:21:38.985997   59529 start.go:242] writing updated cluster config ...
	I1025 21:21:38.986241   59529 ssh_runner.go:195] Run: rm -f paused
	I1025 21:21:39.029756   59529 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1025 21:21:39.031833   59529 out.go:177] 
	W1025 21:21:39.033468   59529 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1025 21:21:39.034851   59529 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1025 21:21:39.036211   59529 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-620621" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 25 21:24:24 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:24.550027958Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=82bae1fa-e260-49ab-a0da-ade2c6c4cbe6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 25 21:24:35 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:35.549977375Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=1b2559b4-f4e1-4aca-a9a6-c2a3951ea6b6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 25 21:24:36 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:36.550820733Z" level=info msg="Stopping pod sandbox: ffd32bc1c9ccc8fee228b0fb2e465b0adfa1151f1a027064f235a7dd983a7891" id=c3835aa6-38a5-4997-9df0-51d67167c392 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 25 21:24:36 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:36.551684221Z" level=info msg="Stopped pod sandbox: ffd32bc1c9ccc8fee228b0fb2e465b0adfa1151f1a027064f235a7dd983a7891" id=c3835aa6-38a5-4997-9df0-51d67167c392 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 25 21:24:36 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:36.961595516Z" level=info msg="Stopping pod sandbox: ffd32bc1c9ccc8fee228b0fb2e465b0adfa1151f1a027064f235a7dd983a7891" id=349d1247-263a-4fe5-96ad-e92e076da4d4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 25 21:24:36 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:36.961651540Z" level=info msg="Stopped pod sandbox (already stopped): ffd32bc1c9ccc8fee228b0fb2e465b0adfa1151f1a027064f235a7dd983a7891" id=349d1247-263a-4fe5-96ad-e92e076da4d4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 25 21:24:37 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:37.717192435Z" level=info msg="Stopping container: cd080f672024e9abbd246ae3cf317dcc653df1f319300c588424f32c4bd4e6f2 (timeout: 2s)" id=4590f321-6219-468a-b236-faa44e235bd9 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 25 21:24:37 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:37.720224985Z" level=info msg="Stopping container: cd080f672024e9abbd246ae3cf317dcc653df1f319300c588424f32c4bd4e6f2 (timeout: 2s)" id=16010f57-23a1-49b4-bf36-0b7052231b46 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 25 21:24:38 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:38.549751055Z" level=info msg="Stopping pod sandbox: ffd32bc1c9ccc8fee228b0fb2e465b0adfa1151f1a027064f235a7dd983a7891" id=339e5780-f02c-4cfa-afc2-449a228b2cd7 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 25 21:24:38 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:38.549807423Z" level=info msg="Stopped pod sandbox (already stopped): ffd32bc1c9ccc8fee228b0fb2e465b0adfa1151f1a027064f235a7dd983a7891" id=339e5780-f02c-4cfa-afc2-449a228b2cd7 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 25 21:24:39 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:39.726693683Z" level=warning msg="Stopping container cd080f672024e9abbd246ae3cf317dcc653df1f319300c588424f32c4bd4e6f2 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=4590f321-6219-468a-b236-faa44e235bd9 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 25 21:24:39 ingress-addon-legacy-620621 conmon[3412]: conmon cd080f672024e9abbd24 <ninfo>: container 3424 exited with status 137
	Oct 25 21:24:39 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:39.887920372Z" level=info msg="Stopped container cd080f672024e9abbd246ae3cf317dcc653df1f319300c588424f32c4bd4e6f2: ingress-nginx/ingress-nginx-controller-7fcf777cb7-l9mjq/controller" id=4590f321-6219-468a-b236-faa44e235bd9 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 25 21:24:39 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:39.887969364Z" level=info msg="Stopped container cd080f672024e9abbd246ae3cf317dcc653df1f319300c588424f32c4bd4e6f2: ingress-nginx/ingress-nginx-controller-7fcf777cb7-l9mjq/controller" id=16010f57-23a1-49b4-bf36-0b7052231b46 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 25 21:24:39 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:39.888555773Z" level=info msg="Stopping pod sandbox: 50339ea9c48a19a9a4bf0cf0d723559c89b3f12b9c0e1385a252a78ececab666" id=286ca105-f7d3-462d-b1a3-130316316e7f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 25 21:24:39 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:39.888587512Z" level=info msg="Stopping pod sandbox: 50339ea9c48a19a9a4bf0cf0d723559c89b3f12b9c0e1385a252a78ececab666" id=0bcf5cca-96be-4be2-8072-2e2c1c7620d9 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 25 21:24:39 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:39.891393012Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-Z4LC7F6DNY7WGM2T - [0:0]\n:KUBE-HP-TAAD7CHOOHJ7GNP3 - [0:0]\n-X KUBE-HP-Z4LC7F6DNY7WGM2T\n-X KUBE-HP-TAAD7CHOOHJ7GNP3\nCOMMIT\n"
	Oct 25 21:24:39 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:39.892679367Z" level=info msg="Closing host port tcp:80"
	Oct 25 21:24:39 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:39.892709484Z" level=info msg="Closing host port tcp:443"
	Oct 25 21:24:39 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:39.893623665Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 25 21:24:39 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:39.893637194Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 25 21:24:39 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:39.893745083Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-l9mjq Namespace:ingress-nginx ID:50339ea9c48a19a9a4bf0cf0d723559c89b3f12b9c0e1385a252a78ececab666 UID:59597fb3-6446-41d7-8554-d4789df618b0 NetNS:/var/run/netns/7e0e9ad0-4d69-4598-8fd9-b9fe08595007 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 25 21:24:39 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:39.893859824Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-l9mjq from CNI network \"kindnet\" (type=ptp)"
	Oct 25 21:24:39 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:39.939713870Z" level=info msg="Stopped pod sandbox: 50339ea9c48a19a9a4bf0cf0d723559c89b3f12b9c0e1385a252a78ececab666" id=286ca105-f7d3-462d-b1a3-130316316e7f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 25 21:24:39 ingress-addon-legacy-620621 crio[958]: time="2023-10-25 21:24:39.939827112Z" level=info msg="Stopped pod sandbox (already stopped): 50339ea9c48a19a9a4bf0cf0d723559c89b3f12b9c0e1385a252a78ececab666" id=0bcf5cca-96be-4be2-8072-2e2c1c7620d9 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7648dbcfa4e72       gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6            22 seconds ago      Running             hello-world-app           0                   e254575fa71cb       hello-world-app-5f5d8b66bb-7grvr
	fb7093b2909d3       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                    2 minutes ago       Running             nginx                     0                   a33cdc12e5939       nginx
	cd080f672024e       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   50339ea9c48a1       ingress-nginx-controller-7fcf777cb7-l9mjq
	94e55c1ad86a9       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   d165ad0f93ca9       ingress-nginx-admission-patch-wft2n
	853962e427193       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   0e776890c4802       ingress-nginx-admission-create-wbx4x
	5440b538f6dc0       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   12a1f95ab7a7e       coredns-66bff467f8-7wl8q
	45130f0e913ad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   a1f14aba224b6       storage-provisioner
	ce6bd4db05261       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   e47fec8024f38       kindnet-hz6fr
	280e769815f26       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   95ebe62ecd8b3       kube-proxy-d7mkv
	968a6df7ffefd       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   18d443b0d51a7       etcd-ingress-addon-legacy-620621
	41e614bb5aac5       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   cc80ed3cfccda       kube-controller-manager-ingress-addon-legacy-620621
	8b6cc6e02caaa       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   896c5918b8064       kube-apiserver-ingress-addon-legacy-620621
	986afb2c60cee       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   dae7f54565d69       kube-scheduler-ingress-addon-legacy-620621
	
	* 
	* ==> coredns [5440b538f6dc057cac4625dfb412ddfeae963266842eda30e229c29db96181d8] <==
	* [INFO] 10.244.0.5:54321 - 22726 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005584581s
	[INFO] 10.244.0.5:44207 - 21529 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00387473s
	[INFO] 10.244.0.5:37604 - 16693 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003546677s
	[INFO] 10.244.0.5:58447 - 63115 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003564812s
	[INFO] 10.244.0.5:35480 - 22016 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.0039224s
	[INFO] 10.244.0.5:55818 - 43400 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003580861s
	[INFO] 10.244.0.5:54321 - 23446 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003653085s
	[INFO] 10.244.0.5:54843 - 43408 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004058882s
	[INFO] 10.244.0.5:52693 - 45073 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003978404s
	[INFO] 10.244.0.5:55818 - 49902 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004927079s
	[INFO] 10.244.0.5:37604 - 54090 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005068218s
	[INFO] 10.244.0.5:35480 - 17870 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00505615s
	[INFO] 10.244.0.5:54321 - 796 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005120509s
	[INFO] 10.244.0.5:52693 - 32053 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004655657s
	[INFO] 10.244.0.5:44207 - 38030 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005164481s
	[INFO] 10.244.0.5:58447 - 27500 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005181346s
	[INFO] 10.244.0.5:54843 - 15263 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005062874s
	[INFO] 10.244.0.5:37604 - 13096 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057698s
	[INFO] 10.244.0.5:35480 - 24123 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00008808s
	[INFO] 10.244.0.5:52693 - 62719 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000071346s
	[INFO] 10.244.0.5:58447 - 9583 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000124673s
	[INFO] 10.244.0.5:54843 - 8448 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000047513s
	[INFO] 10.244.0.5:54321 - 61885 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000263195s
	[INFO] 10.244.0.5:44207 - 63501 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00025439s
	[INFO] 10.244.0.5:55818 - 23036 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000312379s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-620621
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-620621
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=ingress-addon-legacy-620621
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T21_21_10_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Oct 2023 21:21:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-620621
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 25 Oct 2023 21:24:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Oct 2023 21:24:40 +0000   Wed, 25 Oct 2023 21:21:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Oct 2023 21:24:40 +0000   Wed, 25 Oct 2023 21:21:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Oct 2023 21:24:40 +0000   Wed, 25 Oct 2023 21:21:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Oct 2023 21:24:40 +0000   Wed, 25 Oct 2023 21:21:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-620621
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 16981c672d834163af2b65cb4dd5afc4
	  System UUID:                44b779c4-3252-4222-a194-8ae2899d288b
	  Boot ID:                    34092eb3-c5c2-47c9-ae8c-38e7a764813a
	  Kernel Version:             5.15.0-1045-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-7grvr                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-7wl8q                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m20s
	  kube-system                 etcd-ingress-addon-legacy-620621                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kindnet-hz6fr                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m20s
	  kube-system                 kube-apiserver-ingress-addon-legacy-620621             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-620621    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-proxy-d7mkv                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  kube-system                 kube-scheduler-ingress-addon-legacy-620621             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             120Mi (0%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m43s (x5 over 3m43s)  kubelet     Node ingress-addon-legacy-620621 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m43s (x5 over 3m43s)  kubelet     Node ingress-addon-legacy-620621 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m43s (x4 over 3m43s)  kubelet     Node ingress-addon-legacy-620621 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m35s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m35s                  kubelet     Node ingress-addon-legacy-620621 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m35s                  kubelet     Node ingress-addon-legacy-620621 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m35s                  kubelet     Node ingress-addon-legacy-620621 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m19s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m15s                  kubelet     Node ingress-addon-legacy-620621 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004949] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006561] FS-Cache: N-cookie d=00000000021fa65a{9p.inode} n=00000000dab2db8b
	[  +0.008738] FS-Cache: N-key=[8] '78a00f0200000000'
	[  +0.308810] FS-Cache: Duplicate cookie detected
	[  +0.004670] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006750] FS-Cache: O-cookie d=00000000021fa65a{9p.inode} n=0000000092b40cea
	[  +0.007363] FS-Cache: O-key=[8] '81a00f0200000000'
	[  +0.004955] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007980] FS-Cache: N-cookie d=00000000021fa65a{9p.inode} n=00000000471260ab
	[  +0.008707] FS-Cache: N-key=[8] '81a00f0200000000'
	[Oct25 21:20] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct25 21:22] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 96 f0 01 76 c5 7f 02 55 af 5f 22 f8 08 00
	[  +1.016105] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: 96 f0 01 76 c5 7f 02 55 af 5f 22 f8 08 00
	[  +2.015781] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 f0 01 76 c5 7f 02 55 af 5f 22 f8 08 00
	[  +4.159580] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 f0 01 76 c5 7f 02 55 af 5f 22 f8 08 00
	[  +8.195126] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 96 f0 01 76 c5 7f 02 55 af 5f 22 f8 08 00
	[ +16.122408] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 96 f0 01 76 c5 7f 02 55 af 5f 22 f8 08 00
	[Oct25 21:23] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 96 f0 01 76 c5 7f 02 55 af 5f 22 f8 08 00
	
	* 
	* ==> etcd [968a6df7ffefd8a00f38c8ab05edde80600c2a90bc26a4972a7c13e017f4b305] <==
	* raft2023/10/25 21:21:03 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-25 21:21:03.438272 W | auth: simple token is not cryptographically signed
	2023-10-25 21:21:03.441395 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2023/10/25 21:21:03 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-25 21:21:03.442424 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-25 21:21:03.442491 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-10-25 21:21:03.443690 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-25 21:21:03.443848 I | embed: listening for peers on 192.168.49.2:2380
	2023-10-25 21:21:03.443896 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/10/25 21:21:03 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/10/25 21:21:03 INFO: aec36adc501070cc became candidate at term 2
	raft2023/10/25 21:21:03 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/10/25 21:21:03 INFO: aec36adc501070cc became leader at term 2
	raft2023/10/25 21:21:03 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-10-25 21:21:03.734855 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-25 21:21:03.735717 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-25 21:21:03.735770 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-25 21:21:03.735804 I | embed: ready to serve client requests
	2023-10-25 21:21:03.735921 I | etcdserver: published {Name:ingress-addon-legacy-620621 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-10-25 21:21:03.735940 I | embed: ready to serve client requests
	2023-10-25 21:21:03.738137 I | embed: serving client requests on 192.168.49.2:2379
	2023-10-25 21:21:03.738816 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-25 21:21:30.929853 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-66bff467f8-7wl8q.1791768db963cbd8\" " with result "range_response_count:1 size:829" took too long (155.853953ms) to execute
	2023-10-25 21:21:31.278246 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-7wl8q\" " with result "range_response_count:1 size:3753" took too long (238.77913ms) to execute
	2023-10-25 21:21:32.465955 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-7wl8q\" " with result "range_response_count:1 size:3753" took too long (182.162967ms) to execute
	
	* 
	* ==> kernel <==
	*  21:24:45 up  1:07,  0 users,  load average: 0.48, 0.72, 0.52
	Linux ingress-addon-legacy-620621 5.15.0-1045-gcp #53~20.04.2-Ubuntu SMP Wed Oct 18 12:59:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [ce6bd4db05261bb365832f939f1d718835c1ece5970d22c22abbd500f42986e5] <==
	* I1025 21:22:38.576569       1 main.go:227] handling current node
	I1025 21:22:48.588680       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:22:48.588709       1 main.go:227] handling current node
	I1025 21:22:58.592181       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:22:58.592206       1 main.go:227] handling current node
	I1025 21:23:08.604215       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:23:08.604239       1 main.go:227] handling current node
	I1025 21:23:18.608348       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:23:18.608376       1 main.go:227] handling current node
	I1025 21:23:28.612802       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:23:28.612838       1 main.go:227] handling current node
	I1025 21:23:38.616332       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:23:38.616358       1 main.go:227] handling current node
	I1025 21:23:48.625098       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:23:48.625125       1 main.go:227] handling current node
	I1025 21:23:58.633189       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:23:58.633216       1 main.go:227] handling current node
	I1025 21:24:08.636359       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:24:08.636386       1 main.go:227] handling current node
	I1025 21:24:18.643850       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:24:18.643877       1 main.go:227] handling current node
	I1025 21:24:28.648561       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:24:28.648593       1 main.go:227] handling current node
	I1025 21:24:38.657305       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:24:38.657332       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [8b6cc6e02caaa8726d8d9b6f112aaee9a06b5a1bf62567b7c64b6b08c9d2fee2] <==
	* I1025 21:21:07.257694       1 controller.go:86] Starting OpenAPI controller
	E1025 21:21:07.259284       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1025 21:21:07.357121       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 21:21:07.357164       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 21:21:07.357169       1 cache.go:39] Caches are synced for autoregister controller
	I1025 21:21:07.426539       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1025 21:21:07.426586       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1025 21:21:08.256323       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1025 21:21:08.256441       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1025 21:21:08.260538       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1025 21:21:08.263234       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1025 21:21:08.263255       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1025 21:21:08.596432       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 21:21:08.627358       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1025 21:21:08.761697       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1025 21:21:08.762616       1 controller.go:609] quota admission added evaluator for: endpoints
	I1025 21:21:08.765414       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 21:21:09.534942       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1025 21:21:10.226859       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1025 21:21:10.382113       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1025 21:21:10.527090       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 21:21:25.551666       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1025 21:21:25.564554       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1025 21:21:39.686987       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1025 21:22:00.270973       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [41e614bb5aac5bcb53cbdc1be87b1afaef3e8aa15e08cd587a348fda82a25073] <==
	* I1025 21:21:25.570673       1 shared_informer.go:230] Caches are synced for GC 
	I1025 21:21:25.572661       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"49c155d0-d4dd-442b-9395-2980c429275c", APIVersion:"apps/v1", ResourceVersion:"220", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-d7mkv
	I1025 21:21:25.572690       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"4df4138f-259e-480e-a843-16ffdd4223e1", APIVersion:"apps/v1", ResourceVersion:"241", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-hz6fr
	E1025 21:21:25.585484       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"49c155d0-d4dd-442b-9395-2980c429275c", ResourceVersion:"220", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63833865670, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0014a9140), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc0014a9160)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0014a9180), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000a400c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0xc0014a91a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014a91c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014a9200)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0009e1c70), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000eb8628), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00093e0e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0000d4158)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000eb8678)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1025 21:21:25.633834       1 shared_informer.go:230] Caches are synced for service account 
	I1025 21:21:25.636693       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1025 21:21:25.636716       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1025 21:21:25.648434       1 shared_informer.go:230] Caches are synced for namespace 
	I1025 21:21:25.674792       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1025 21:21:25.771126       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"647614aa-49fd-4de6-a404-a701532dd610", APIVersion:"apps/v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1025 21:21:25.783583       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"72f0cae2-166d-49f4-810a-eab566baa698", APIVersion:"apps/v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-jkcwt
	I1025 21:21:26.126522       1 request.go:621] Throttling request took 1.09308659s, request: GET:https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
	I1025 21:21:26.683792       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I1025 21:21:26.683831       1 shared_informer.go:230] Caches are synced for resource quota 
	I1025 21:21:35.484104       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1025 21:21:39.679273       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"85d00a7e-c0de-4112-bad1-1327f9fc2bf1", APIVersion:"apps/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1025 21:21:39.685031       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"7a767414-4ee3-467f-8ad5-ca9fea3bd623", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-l9mjq
	I1025 21:21:39.733843       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"4568ed30-4aad-4f86-9258-083489d15f04", APIVersion:"batch/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-wbx4x
	I1025 21:21:39.744163       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"f93e7f63-5691-49aa-9bfc-4ce0c9816d22", APIVersion:"batch/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-wft2n
	I1025 21:21:42.603318       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"4568ed30-4aad-4f86-9258-083489d15f04", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1025 21:21:42.612201       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"f93e7f63-5691-49aa-9bfc-4ce0c9816d22", APIVersion:"batch/v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1025 21:24:20.741590       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"aa6e5844-90d0-4491-bef8-84f79d3593ec", APIVersion:"apps/v1", ResourceVersion:"707", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1025 21:24:20.746790       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"161ee37f-5c6a-4f4b-b125-03e53ab262ba", APIVersion:"apps/v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-7grvr
	E1025 21:24:42.454211       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-jhvkz" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [280e769815f26685fac116f257839324166e0750fe672954475d20f26179a626] <==
	* W1025 21:21:26.461554       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1025 21:21:26.467664       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1025 21:21:26.467690       1 server_others.go:186] Using iptables Proxier.
	I1025 21:21:26.467914       1 server.go:583] Version: v1.18.20
	I1025 21:21:26.468352       1 config.go:133] Starting endpoints config controller
	I1025 21:21:26.468435       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1025 21:21:26.468643       1 config.go:315] Starting service config controller
	I1025 21:21:26.468668       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1025 21:21:26.568631       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1025 21:21:26.568836       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [986afb2c60cee16f61993cb71dd7e942457ad60d5e47aca0767398e9507d7fb8] <==
	* W1025 21:21:07.274129       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 21:21:07.339054       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1025 21:21:07.339081       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1025 21:21:07.341090       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 21:21:07.341113       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 21:21:07.341531       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1025 21:21:07.341989       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1025 21:21:07.342545       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 21:21:07.343067       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 21:21:07.343585       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 21:21:07.343583       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 21:21:07.343659       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 21:21:07.343776       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 21:21:07.343849       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 21:21:07.343939       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 21:21:07.344171       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 21:21:07.344279       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 21:21:07.344397       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 21:21:07.344423       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 21:21:08.235391       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 21:21:08.284982       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 21:21:08.289998       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 21:21:08.435468       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 21:21:08.627084       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1025 21:21:11.141328       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Oct 25 21:24:12 ingress-addon-legacy-620621 kubelet[1874]: E1025 21:24:12.550611    1874 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 25 21:24:12 ingress-addon-legacy-620621 kubelet[1874]: E1025 21:24:12.550645    1874 pod_workers.go:191] Error syncing pod d9eaa29f-e82e-4069-9b56-2d23062bdf42 ("kube-ingress-dns-minikube_kube-system(d9eaa29f-e82e-4069-9b56-2d23062bdf42)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 25 21:24:20 ingress-addon-legacy-620621 kubelet[1874]: I1025 21:24:20.750789    1874 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Oct 25 21:24:20 ingress-addon-legacy-620621 kubelet[1874]: I1025 21:24:20.893616    1874 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-bwlvf" (UniqueName: "kubernetes.io/secret/429375b0-9c3d-4d74-8842-fa01bc3bc325-default-token-bwlvf") pod "hello-world-app-5f5d8b66bb-7grvr" (UID: "429375b0-9c3d-4d74-8842-fa01bc3bc325")
	Oct 25 21:24:21 ingress-addon-legacy-620621 kubelet[1874]: W1025 21:24:21.147107    1874 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/1a7ddd86d6c4a01989da477c73ab630aa0fec24eeb0e7dbdc438f064ae299440/crio-e254575fa71cbcf4bee567eae859c41302e6c98a9593f0f6c84991fb275fb256 WatchSource:0}: Error finding container e254575fa71cbcf4bee567eae859c41302e6c98a9593f0f6c84991fb275fb256: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000ce6a00 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x750800) %!!(MISSING)s(func() error=0x750790)}
	Oct 25 21:24:24 ingress-addon-legacy-620621 kubelet[1874]: E1025 21:24:24.550386    1874 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 25 21:24:24 ingress-addon-legacy-620621 kubelet[1874]: E1025 21:24:24.550430    1874 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 25 21:24:24 ingress-addon-legacy-620621 kubelet[1874]: E1025 21:24:24.550486    1874 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 25 21:24:24 ingress-addon-legacy-620621 kubelet[1874]: E1025 21:24:24.550519    1874 pod_workers.go:191] Error syncing pod d9eaa29f-e82e-4069-9b56-2d23062bdf42 ("kube-ingress-dns-minikube_kube-system(d9eaa29f-e82e-4069-9b56-2d23062bdf42)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 25 21:24:35 ingress-addon-legacy-620621 kubelet[1874]: E1025 21:24:35.550304    1874 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 25 21:24:35 ingress-addon-legacy-620621 kubelet[1874]: E1025 21:24:35.550358    1874 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 25 21:24:35 ingress-addon-legacy-620621 kubelet[1874]: E1025 21:24:35.550406    1874 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 25 21:24:35 ingress-addon-legacy-620621 kubelet[1874]: E1025 21:24:35.550438    1874 pod_workers.go:191] Error syncing pod d9eaa29f-e82e-4069-9b56-2d23062bdf42 ("kube-ingress-dns-minikube_kube-system(d9eaa29f-e82e-4069-9b56-2d23062bdf42)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 25 21:24:36 ingress-addon-legacy-620621 kubelet[1874]: I1025 21:24:36.564356    1874 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-8mvh8" (UniqueName: "kubernetes.io/secret/d9eaa29f-e82e-4069-9b56-2d23062bdf42-minikube-ingress-dns-token-8mvh8") pod "d9eaa29f-e82e-4069-9b56-2d23062bdf42" (UID: "d9eaa29f-e82e-4069-9b56-2d23062bdf42")
	Oct 25 21:24:36 ingress-addon-legacy-620621 kubelet[1874]: I1025 21:24:36.566165    1874 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9eaa29f-e82e-4069-9b56-2d23062bdf42-minikube-ingress-dns-token-8mvh8" (OuterVolumeSpecName: "minikube-ingress-dns-token-8mvh8") pod "d9eaa29f-e82e-4069-9b56-2d23062bdf42" (UID: "d9eaa29f-e82e-4069-9b56-2d23062bdf42"). InnerVolumeSpecName "minikube-ingress-dns-token-8mvh8". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 25 21:24:36 ingress-addon-legacy-620621 kubelet[1874]: I1025 21:24:36.664650    1874 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-8mvh8" (UniqueName: "kubernetes.io/secret/d9eaa29f-e82e-4069-9b56-2d23062bdf42-minikube-ingress-dns-token-8mvh8") on node "ingress-addon-legacy-620621" DevicePath ""
	Oct 25 21:24:37 ingress-addon-legacy-620621 kubelet[1874]: E1025 21:24:37.718091    1874 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-l9mjq.179176ba7655000d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-l9mjq", UID:"59597fb3-6446-41d7-8554-d4789df618b0", APIVersion:"v1", ResourceVersion:"475", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-620621"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1467fc56ab76e0d, ext:207520045448, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1467fc56ab76e0d, ext:207520045448, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-l9mjq.179176ba7655000d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 25 21:24:37 ingress-addon-legacy-620621 kubelet[1874]: E1025 21:24:37.722783    1874 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-l9mjq.179176ba7655000d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-l9mjq", UID:"59597fb3-6446-41d7-8554-d4789df618b0", APIVersion:"v1", ResourceVersion:"475", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-620621"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1467fc56ab76e0d, ext:207520045448, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1467fc56ae56b58, ext:207523059410, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-l9mjq.179176ba7655000d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 25 21:24:39 ingress-addon-legacy-620621 kubelet[1874]: W1025 21:24:39.956994    1874 pod_container_deletor.go:77] Container "50339ea9c48a19a9a4bf0cf0d723559c89b3f12b9c0e1385a252a78ececab666" not found in pod's containers
	Oct 25 21:24:40 ingress-addon-legacy-620621 kubelet[1874]: I1025 21:24:40.573704    1874 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-vwpcc" (UniqueName: "kubernetes.io/secret/59597fb3-6446-41d7-8554-d4789df618b0-ingress-nginx-token-vwpcc") pod "59597fb3-6446-41d7-8554-d4789df618b0" (UID: "59597fb3-6446-41d7-8554-d4789df618b0")
	Oct 25 21:24:40 ingress-addon-legacy-620621 kubelet[1874]: I1025 21:24:40.573776    1874 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/59597fb3-6446-41d7-8554-d4789df618b0-webhook-cert") pod "59597fb3-6446-41d7-8554-d4789df618b0" (UID: "59597fb3-6446-41d7-8554-d4789df618b0")
	Oct 25 21:24:40 ingress-addon-legacy-620621 kubelet[1874]: I1025 21:24:40.575641    1874 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59597fb3-6446-41d7-8554-d4789df618b0-ingress-nginx-token-vwpcc" (OuterVolumeSpecName: "ingress-nginx-token-vwpcc") pod "59597fb3-6446-41d7-8554-d4789df618b0" (UID: "59597fb3-6446-41d7-8554-d4789df618b0"). InnerVolumeSpecName "ingress-nginx-token-vwpcc". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 25 21:24:40 ingress-addon-legacy-620621 kubelet[1874]: I1025 21:24:40.575964    1874 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59597fb3-6446-41d7-8554-d4789df618b0-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "59597fb3-6446-41d7-8554-d4789df618b0" (UID: "59597fb3-6446-41d7-8554-d4789df618b0"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 25 21:24:40 ingress-addon-legacy-620621 kubelet[1874]: I1025 21:24:40.674050    1874 reconciler.go:319] Volume detached for volume "ingress-nginx-token-vwpcc" (UniqueName: "kubernetes.io/secret/59597fb3-6446-41d7-8554-d4789df618b0-ingress-nginx-token-vwpcc") on node "ingress-addon-legacy-620621" DevicePath ""
	Oct 25 21:24:40 ingress-addon-legacy-620621 kubelet[1874]: I1025 21:24:40.674083    1874 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/59597fb3-6446-41d7-8554-d4789df618b0-webhook-cert") on node "ingress-addon-legacy-620621" DevicePath ""
	
	* 
	* ==> storage-provisioner [45130f0e913adbdef447e5cf05f2311440998ff9cc83e1f671d7786260224bc8] <==
	* I1025 21:21:32.092846       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 21:21:32.100025       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 21:21:32.100074       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 21:21:32.141101       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 21:21:32.141188       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dfb096a3-ee2e-47a1-8c34-495fa626c1e7", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-620621_05b0eb08-8c00-45a5-9251-0bb8d0ce46ee became leader
	I1025 21:21:32.141243       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-620621_05b0eb08-8c00-45a5-9251-0bb8d0ce46ee!
	I1025 21:21:32.241869       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-620621_05b0eb08-8c00-45a5-9251-0bb8d0ce46ee!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-620621 -n ingress-addon-legacy-620621
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-620621 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (175.79s)
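The kubelet errors in the logs above fail with `short-name "cryptexlabs/minikube-ingress-dns:..." did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"`, i.e. CRI-O refuses to guess a registry for an unqualified image name. A minimal sketch of the usual remedy, demonstrated against a temporary file rather than the node's real `/etc/containers/registries.conf` (the `docker.io` default here is an assumption, not taken from this report):

```shell
# Sketch only: append the search-registry line to a scratch copy instead of
# the live /etc/containers/registries.conf, then verify it is present.
conf="$(mktemp)"
printf 'unqualified-search-registries = ["docker.io"]\n' >> "$conf"
if grep -q 'unqualified-search-registries' "$conf"; then
  result="search registry configured"
fi
echo "$result"
rm -f "$conf"
```

On a real node the line would go into `/etc/containers/registries.conf` followed by a CRI-O restart; alternatively, fully qualifying the image reference (e.g. `docker.io/cryptexlabs/minikube-ingress-dns:...`) sidesteps short-name resolution entirely.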

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874778 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874778 -- exec busybox-5bc68d56bd-2z62q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874778 -- exec busybox-5bc68d56bd-2z62q -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-874778 -- exec busybox-5bc68d56bd-2z62q -- sh -c "ping -c 1 192.168.58.1": exit status 1 (183.478384ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-2z62q): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874778 -- exec busybox-5bc68d56bd-xh8tr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874778 -- exec busybox-5bc68d56bd-xh8tr -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-874778 -- exec busybox-5bc68d56bd-xh8tr -- sh -c "ping -c 1 192.168.58.1": exit status 1 (175.238149ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-xh8tr): exit status 1
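The `ping: permission denied (are you root?)` failures above are the classic symptom of busybox `ping` running without `CAP_NET_RAW` in a pod whose GIDs fall outside the node's `net.ipv4.ping_group_range` sysctl. A read-only sketch of the check (interpreting the kernel default `1 0` as "disabled" is general Linux behavior, not something stated in this report):

```shell
# Inspect whether unprivileged (non-CAP_NET_RAW) ICMP sockets are allowed.
# Makes no changes; falls back to the kernel default "1 0" off Linux.
range="$(cat /proc/sys/net/ipv4/ping_group_range 2>/dev/null || echo "1 0")"
echo "ping_group_range: $range"
gid_min="$(echo "$range" | awk '{print $1}')"
gid_max="$(echo "$range" | awk '{print $2}')"
if [ "$gid_min" -gt "$gid_max" ]; then
  # An empty (inverted) range means unprivileged ping is fully disabled.
  verdict="unprivileged ping disabled"
else
  verdict="unprivileged ping allowed for GIDs $gid_min-$gid_max"
fi
echo "$verdict"
```

Hypothetical fixes (not applied by this test) are widening the range on each node with `sysctl -w net.ipv4.ping_group_range="0 2147483647"`, or granting the pod `CAP_NET_RAW` via `securityContext.capabilities.add`.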
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-874778
helpers_test.go:235: (dbg) docker inspect multinode-874778:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0862499eed10d3d0fe339b85aed58bcf1373fd182861731b8aa1cf4b7ed35d6b",
	        "Created": "2023-10-25T21:29:46.519742993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 105722,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-25T21:29:46.793378297Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/0862499eed10d3d0fe339b85aed58bcf1373fd182861731b8aa1cf4b7ed35d6b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0862499eed10d3d0fe339b85aed58bcf1373fd182861731b8aa1cf4b7ed35d6b/hostname",
	        "HostsPath": "/var/lib/docker/containers/0862499eed10d3d0fe339b85aed58bcf1373fd182861731b8aa1cf4b7ed35d6b/hosts",
	        "LogPath": "/var/lib/docker/containers/0862499eed10d3d0fe339b85aed58bcf1373fd182861731b8aa1cf4b7ed35d6b/0862499eed10d3d0fe339b85aed58bcf1373fd182861731b8aa1cf4b7ed35d6b-json.log",
	        "Name": "/multinode-874778",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-874778:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-874778",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/24f7ae10561483bca0cc67b1fe6039915bbbc3c5ff87d56dfb91cd621f54a20b-init/diff:/var/lib/docker/overlay2/08f48c2099646ae35740a1c0f07609c9eefd4a79bbbda6d2c067385f70ad62be/diff",
	                "MergedDir": "/var/lib/docker/overlay2/24f7ae10561483bca0cc67b1fe6039915bbbc3c5ff87d56dfb91cd621f54a20b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/24f7ae10561483bca0cc67b1fe6039915bbbc3c5ff87d56dfb91cd621f54a20b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/24f7ae10561483bca0cc67b1fe6039915bbbc3c5ff87d56dfb91cd621f54a20b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-874778",
	                "Source": "/var/lib/docker/volumes/multinode-874778/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-874778",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-874778",
	                "name.minikube.sigs.k8s.io": "multinode-874778",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7018bed285789c2131e99f2d71ca3bc556508b4b4b8cc923677b7923ed0598a8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7018bed28578",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-874778": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0862499eed10",
	                        "multinode-874778"
	                    ],
	                    "NetworkID": "0dc19cf164c5769b45f1121ad336238f08690f6a880fef317676eb77b1f48632",
	                    "EndpointID": "a501d5957b3b37d231ced6fe5a6e249f942ffdf509ec09539cf26a313058c762",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-874778 -n multinode-874778
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-874778 logs -n 25: (1.156229529s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-061439                           | mount-start-2-061439 | jenkins | v1.31.2 | 25 Oct 23 21:29 UTC | 25 Oct 23 21:29 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-061439 ssh -- ls                    | mount-start-2-061439 | jenkins | v1.31.2 | 25 Oct 23 21:29 UTC | 25 Oct 23 21:29 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-047570                           | mount-start-1-047570 | jenkins | v1.31.2 | 25 Oct 23 21:29 UTC | 25 Oct 23 21:29 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-061439 ssh -- ls                    | mount-start-2-061439 | jenkins | v1.31.2 | 25 Oct 23 21:29 UTC | 25 Oct 23 21:29 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-061439                           | mount-start-2-061439 | jenkins | v1.31.2 | 25 Oct 23 21:29 UTC | 25 Oct 23 21:29 UTC |
	| start   | -p mount-start-2-061439                           | mount-start-2-061439 | jenkins | v1.31.2 | 25 Oct 23 21:29 UTC | 25 Oct 23 21:29 UTC |
	| ssh     | mount-start-2-061439 ssh -- ls                    | mount-start-2-061439 | jenkins | v1.31.2 | 25 Oct 23 21:29 UTC | 25 Oct 23 21:29 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-061439                           | mount-start-2-061439 | jenkins | v1.31.2 | 25 Oct 23 21:29 UTC | 25 Oct 23 21:29 UTC |
	| delete  | -p mount-start-1-047570                           | mount-start-1-047570 | jenkins | v1.31.2 | 25 Oct 23 21:29 UTC | 25 Oct 23 21:29 UTC |
	| start   | -p multinode-874778                               | multinode-874778     | jenkins | v1.31.2 | 25 Oct 23 21:29 UTC | 25 Oct 23 21:31 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-874778 -- apply -f                   | multinode-874778     | jenkins | v1.31.2 | 25 Oct 23 21:31 UTC | 25 Oct 23 21:31 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-874778 -- rollout                    | multinode-874778     | jenkins | v1.31.2 | 25 Oct 23 21:31 UTC | 25 Oct 23 21:31 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-874778 -- get pods -o                | multinode-874778     | jenkins | v1.31.2 | 25 Oct 23 21:31 UTC | 25 Oct 23 21:31 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-874778 -- get pods -o                | multinode-874778     | jenkins | v1.31.2 | 25 Oct 23 21:31 UTC | 25 Oct 23 21:31 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-874778 -- exec                       | multinode-874778     | jenkins | v1.31.2 | 25 Oct 23 21:31 UTC | 25 Oct 23 21:31 UTC |
	|         | busybox-5bc68d56bd-2z62q --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-874778 -- exec                       | multinode-874778     | jenkins | v1.31.2 | 25 Oct 23 21:31 UTC | 25 Oct 23 21:31 UTC |
	|         | busybox-5bc68d56bd-xh8tr --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-874778 -- exec                       | multinode-874778     | jenkins | v1.31.2 | 25 Oct 23 21:31 UTC | 25 Oct 23 21:31 UTC |
	|         | busybox-5bc68d56bd-2z62q --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-874778 -- exec                       | multinode-874778     | jenkins | v1.31.2 | 25 Oct 23 21:31 UTC | 25 Oct 23 21:31 UTC |
	|         | busybox-5bc68d56bd-xh8tr --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-874778 -- exec                       | multinode-874778     | jenkins | v1.31.2 | 25 Oct 23 21:31 UTC | 25 Oct 23 21:31 UTC |
	|         | busybox-5bc68d56bd-2z62q -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-874778 -- exec                       | multinode-874778     | jenkins | v1.31.2 | 25 Oct 23 21:31 UTC | 25 Oct 23 21:31 UTC |
	|         | busybox-5bc68d56bd-xh8tr -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-874778 -- get pods -o                | multinode-874778     | jenkins | v1.31.2 | 25 Oct 23 21:31 UTC | 25 Oct 23 21:31 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-874778 -- exec                       | multinode-874778     | jenkins | v1.31.2 | 25 Oct 23 21:31 UTC | 25 Oct 23 21:31 UTC |
	|         | busybox-5bc68d56bd-2z62q                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-874778 -- exec                       | multinode-874778     | jenkins | v1.31.2 | 25 Oct 23 21:31 UTC |                     |
	|         | busybox-5bc68d56bd-2z62q -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-874778 -- exec                       | multinode-874778     | jenkins | v1.31.2 | 25 Oct 23 21:31 UTC | 25 Oct 23 21:31 UTC |
	|         | busybox-5bc68d56bd-xh8tr                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-874778 -- exec                       | multinode-874778     | jenkins | v1.31.2 | 25 Oct 23 21:31 UTC |                     |
	|         | busybox-5bc68d56bd-xh8tr -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 21:29:40
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 21:29:40.652998  105113 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:29:40.653122  105113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:29:40.653131  105113 out.go:309] Setting ErrFile to fd 2...
	I1025 21:29:40.653136  105113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:29:40.653313  105113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
	I1025 21:29:40.653872  105113 out.go:303] Setting JSON to false
	I1025 21:29:40.655014  105113 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4330,"bootTime":1698265051,"procs":492,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:29:40.655074  105113 start.go:138] virtualization: kvm guest
	I1025 21:29:40.657369  105113 out.go:177] * [multinode-874778] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:29:40.659403  105113 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 21:29:40.659467  105113 notify.go:220] Checking for updates...
	I1025 21:29:40.661060  105113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:29:40.662701  105113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:29:40.664211  105113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	I1025 21:29:40.665692  105113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 21:29:40.667271  105113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:29:40.668904  105113 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:29:40.690712  105113 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:29:40.690810  105113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:29:40.742398  105113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:36 SystemTime:2023-10-25 21:29:40.734087996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:29:40.742543  105113 docker.go:295] overlay module found
	I1025 21:29:40.744780  105113 out.go:177] * Using the docker driver based on user configuration
	I1025 21:29:40.746580  105113 start.go:298] selected driver: docker
	I1025 21:29:40.746598  105113 start.go:902] validating driver "docker" against <nil>
	I1025 21:29:40.746610  105113 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:29:40.747375  105113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:29:40.797109  105113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:36 SystemTime:2023-10-25 21:29:40.789325316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:29:40.797321  105113 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 21:29:40.797590  105113 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:29:40.799572  105113 out.go:177] * Using Docker driver with root privileges
	I1025 21:29:40.801227  105113 cni.go:84] Creating CNI manager for ""
	I1025 21:29:40.801256  105113 cni.go:136] 0 nodes found, recommending kindnet
	I1025 21:29:40.801269  105113 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 21:29:40.801288  105113 start_flags.go:323] config:
	{Name:multinode-874778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-874778 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:29:40.803096  105113 out.go:177] * Starting control plane node multinode-874778 in cluster multinode-874778
	I1025 21:29:40.804444  105113 cache.go:121] Beginning downloading kic base image for docker with crio
	I1025 21:29:40.805913  105113 out.go:177] * Pulling base image ...
	I1025 21:29:40.807266  105113 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 21:29:40.807290  105113 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 21:29:40.807309  105113 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1025 21:29:40.807315  105113 cache.go:56] Caching tarball of preloaded images
	I1025 21:29:40.807428  105113 preload.go:174] Found /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 21:29:40.807441  105113 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1025 21:29:40.807753  105113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/config.json ...
	I1025 21:29:40.807774  105113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/config.json: {Name:mk434dae57f2b7f2a6650f6b3e7d8892c30663ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:29:40.822615  105113 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 21:29:40.822638  105113 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 21:29:40.822658  105113 cache.go:194] Successfully downloaded all kic artifacts
	I1025 21:29:40.822703  105113 start.go:365] acquiring machines lock for multinode-874778: {Name:mk07fbf3de3e777b0381e8bca999a655bb5e5540 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:29:40.822805  105113 start.go:369] acquired machines lock for "multinode-874778" in 80.421µs
	I1025 21:29:40.822830  105113 start.go:93] Provisioning new machine with config: &{Name:multinode-874778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-874778 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 21:29:40.823061  105113 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:29:40.825081  105113 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:29:40.825380  105113 start.go:159] libmachine.API.Create for "multinode-874778" (driver="docker")
	I1025 21:29:40.825422  105113 client.go:168] LocalClient.Create starting
	I1025 21:29:40.825484  105113 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem
	I1025 21:29:40.825527  105113 main.go:141] libmachine: Decoding PEM data...
	I1025 21:29:40.825550  105113 main.go:141] libmachine: Parsing certificate...
	I1025 21:29:40.825615  105113 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem
	I1025 21:29:40.825645  105113 main.go:141] libmachine: Decoding PEM data...
	I1025 21:29:40.825663  105113 main.go:141] libmachine: Parsing certificate...
	I1025 21:29:40.826073  105113 cli_runner.go:164] Run: docker network inspect multinode-874778 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:29:40.841271  105113 cli_runner.go:211] docker network inspect multinode-874778 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:29:40.841319  105113 network_create.go:281] running [docker network inspect multinode-874778] to gather additional debugging logs...
	I1025 21:29:40.841331  105113 cli_runner.go:164] Run: docker network inspect multinode-874778
	W1025 21:29:40.856749  105113 cli_runner.go:211] docker network inspect multinode-874778 returned with exit code 1
	I1025 21:29:40.856775  105113 network_create.go:284] error running [docker network inspect multinode-874778]: docker network inspect multinode-874778: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-874778 not found
	I1025 21:29:40.856786  105113 network_create.go:286] output of [docker network inspect multinode-874778]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-874778 not found
	
	** /stderr **
	I1025 21:29:40.856860  105113 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:29:40.872543  105113 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0e4ec9dfdced IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f0:41:d0:e9} reservation:<nil>}
	I1025 21:29:40.873005  105113 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00058fba0}
	I1025 21:29:40.873027  105113 network_create.go:124] attempt to create docker network multinode-874778 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 21:29:40.873082  105113 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-874778 multinode-874778
	I1025 21:29:40.925221  105113 network_create.go:108] docker network multinode-874778 192.168.58.0/24 created
	I1025 21:29:40.925258  105113 kic.go:118] calculated static IP "192.168.58.2" for the "multinode-874778" container
	I1025 21:29:40.925329  105113 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:29:40.940616  105113 cli_runner.go:164] Run: docker volume create multinode-874778 --label name.minikube.sigs.k8s.io=multinode-874778 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:29:40.957811  105113 oci.go:103] Successfully created a docker volume multinode-874778
	I1025 21:29:40.957895  105113 cli_runner.go:164] Run: docker run --rm --name multinode-874778-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-874778 --entrypoint /usr/bin/test -v multinode-874778:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1025 21:29:41.441990  105113 oci.go:107] Successfully prepared a docker volume multinode-874778
	I1025 21:29:41.442035  105113 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 21:29:41.442058  105113 kic.go:191] Starting extracting preloaded images to volume ...
	I1025 21:29:41.442117  105113 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-874778:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 21:29:46.455225  105113 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-874778:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (5.013033708s)
	I1025 21:29:46.455263  105113 kic.go:200] duration metric: took 5.013202 seconds to extract preloaded images to volume
	W1025 21:29:46.455438  105113 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 21:29:46.455536  105113 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 21:29:46.505941  105113 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-874778 --name multinode-874778 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-874778 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-874778 --network multinode-874778 --ip 192.168.58.2 --volume multinode-874778:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 21:29:46.801430  105113 cli_runner.go:164] Run: docker container inspect multinode-874778 --format={{.State.Running}}
	I1025 21:29:46.820684  105113 cli_runner.go:164] Run: docker container inspect multinode-874778 --format={{.State.Status}}
	I1025 21:29:46.838604  105113 cli_runner.go:164] Run: docker exec multinode-874778 stat /var/lib/dpkg/alternatives/iptables
	I1025 21:29:46.876822  105113 oci.go:144] the created container "multinode-874778" has a running status.
	I1025 21:29:46.876871  105113 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778/id_rsa...
	I1025 21:29:47.092323  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1025 21:29:47.092368  105113 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 21:29:47.113710  105113 cli_runner.go:164] Run: docker container inspect multinode-874778 --format={{.State.Status}}
	I1025 21:29:47.130633  105113 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 21:29:47.130659  105113 kic_runner.go:114] Args: [docker exec --privileged multinode-874778 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 21:29:47.237169  105113 cli_runner.go:164] Run: docker container inspect multinode-874778 --format={{.State.Status}}
	I1025 21:29:47.254229  105113 machine.go:88] provisioning docker machine ...
	I1025 21:29:47.254276  105113 ubuntu.go:169] provisioning hostname "multinode-874778"
	I1025 21:29:47.254351  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778
	I1025 21:29:47.275956  105113 main.go:141] libmachine: Using SSH client type: native
	I1025 21:29:47.276521  105113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1025 21:29:47.276548  105113 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-874778 && echo "multinode-874778" | sudo tee /etc/hostname
	I1025 21:29:47.487716  105113 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-874778
	
	I1025 21:29:47.487810  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778
	I1025 21:29:47.505146  105113 main.go:141] libmachine: Using SSH client type: native
	I1025 21:29:47.505652  105113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1025 21:29:47.505686  105113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-874778' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-874778/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-874778' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 21:29:47.625949  105113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 21:29:47.625979  105113 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17488-11542/.minikube CaCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17488-11542/.minikube}
	I1025 21:29:47.626015  105113 ubuntu.go:177] setting up certificates
	I1025 21:29:47.626025  105113 provision.go:83] configureAuth start
	I1025 21:29:47.626080  105113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-874778
	I1025 21:29:47.641796  105113 provision.go:138] copyHostCerts
	I1025 21:29:47.641840  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem
	I1025 21:29:47.641866  105113 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem, removing ...
	I1025 21:29:47.641876  105113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem
	I1025 21:29:47.641943  105113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem (1078 bytes)
	I1025 21:29:47.642032  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem
	I1025 21:29:47.642054  105113 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem, removing ...
	I1025 21:29:47.642061  105113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem
	I1025 21:29:47.642087  105113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem (1123 bytes)
	I1025 21:29:47.642153  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem
	I1025 21:29:47.642173  105113 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem, removing ...
	I1025 21:29:47.642177  105113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem
	I1025 21:29:47.642201  105113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem (1675 bytes)
	I1025 21:29:47.642251  105113 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem org=jenkins.multinode-874778 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-874778]
	I1025 21:29:47.776943  105113 provision.go:172] copyRemoteCerts
	I1025 21:29:47.776997  105113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 21:29:47.777029  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778
	I1025 21:29:47.793447  105113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778/id_rsa Username:docker}
	I1025 21:29:47.878045  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 21:29:47.878104  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 21:29:47.898146  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 21:29:47.898199  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 21:29:47.917892  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 21:29:47.917939  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 21:29:47.937888  105113 provision.go:86] duration metric: configureAuth took 311.852608ms
	I1025 21:29:47.937914  105113 ubuntu.go:193] setting minikube options for container-runtime
	I1025 21:29:47.938091  105113 config.go:182] Loaded profile config "multinode-874778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:29:47.938200  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778
	I1025 21:29:47.953505  105113 main.go:141] libmachine: Using SSH client type: native
	I1025 21:29:47.953817  105113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1025 21:29:47.953836  105113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 21:29:48.151387  105113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 21:29:48.151409  105113 machine.go:91] provisioned docker machine in 897.158018ms
	I1025 21:29:48.151417  105113 client.go:171] LocalClient.Create took 7.325986241s
	I1025 21:29:48.151435  105113 start.go:167] duration metric: libmachine.API.Create for "multinode-874778" took 7.326056817s
	I1025 21:29:48.151444  105113 start.go:300] post-start starting for "multinode-874778" (driver="docker")
	I1025 21:29:48.151456  105113 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 21:29:48.151523  105113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 21:29:48.151565  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778
	I1025 21:29:48.167175  105113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778/id_rsa Username:docker}
	I1025 21:29:48.255008  105113 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 21:29:48.257927  105113 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1025 21:29:48.257951  105113 command_runner.go:130] > NAME="Ubuntu"
	I1025 21:29:48.257971  105113 command_runner.go:130] > VERSION_ID="22.04"
	I1025 21:29:48.257979  105113 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1025 21:29:48.257987  105113 command_runner.go:130] > VERSION_CODENAME=jammy
	I1025 21:29:48.257994  105113 command_runner.go:130] > ID=ubuntu
	I1025 21:29:48.258005  105113 command_runner.go:130] > ID_LIKE=debian
	I1025 21:29:48.258017  105113 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1025 21:29:48.258029  105113 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1025 21:29:48.258043  105113 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1025 21:29:48.258058  105113 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1025 21:29:48.258069  105113 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1025 21:29:48.258138  105113 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 21:29:48.258180  105113 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 21:29:48.258199  105113 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 21:29:48.258211  105113 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 21:29:48.258223  105113 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-11542/.minikube/addons for local assets ...
	I1025 21:29:48.258310  105113 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-11542/.minikube/files for local assets ...
	I1025 21:29:48.258407  105113 filesync.go:149] local asset: /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem -> 183232.pem in /etc/ssl/certs
	I1025 21:29:48.258423  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem -> /etc/ssl/certs/183232.pem
	I1025 21:29:48.258522  105113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 21:29:48.265895  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem --> /etc/ssl/certs/183232.pem (1708 bytes)
	I1025 21:29:48.286036  105113 start.go:303] post-start completed in 134.577689ms
	I1025 21:29:48.286362  105113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-874778
	I1025 21:29:48.301949  105113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/config.json ...
	I1025 21:29:48.302172  105113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:29:48.302207  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778
	I1025 21:29:48.318020  105113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778/id_rsa Username:docker}
	I1025 21:29:48.406753  105113 command_runner.go:130] > 23%!
	(MISSING)I1025 21:29:48.406821  105113 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:29:48.410580  105113 command_runner.go:130] > 225G
	I1025 21:29:48.410756  105113 start.go:128] duration metric: createHost completed in 7.587667404s
	I1025 21:29:48.410777  105113 start.go:83] releasing machines lock for "multinode-874778", held for 7.58795953s
	I1025 21:29:48.410831  105113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-874778
	I1025 21:29:48.426941  105113 ssh_runner.go:195] Run: cat /version.json
	I1025 21:29:48.426988  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778
	I1025 21:29:48.427042  105113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 21:29:48.427106  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778
	I1025 21:29:48.443918  105113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778/id_rsa Username:docker}
	I1025 21:29:48.444208  105113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778/id_rsa Username:docker}
	I1025 21:29:48.611068  105113 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1025 21:29:48.611167  105113 command_runner.go:130] > {"iso_version": "v1.31.0-1697471113-17434", "kicbase_version": "v0.0.40-1698055645-17423", "minikube_version": "v1.31.2", "commit": "585245745aba695f9444ad633713942a6eacd882"}
	I1025 21:29:48.611282  105113 ssh_runner.go:195] Run: systemctl --version
	I1025 21:29:48.615329  105113 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I1025 21:29:48.615360  105113 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1025 21:29:48.615423  105113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 21:29:48.751294  105113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 21:29:48.755461  105113 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1025 21:29:48.755504  105113 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1025 21:29:48.755515  105113 command_runner.go:130] > Device: 33h/51d	Inode: 552112      Links: 1
	I1025 21:29:48.755531  105113 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 21:29:48.755544  105113 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1025 21:29:48.755550  105113 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1025 21:29:48.755555  105113 command_runner.go:130] > Change: 2023-10-25 21:11:12.050317769 +0000
	I1025 21:29:48.755561  105113 command_runner.go:130] >  Birth: 2023-10-25 21:11:12.050317769 +0000
	I1025 21:29:48.755615  105113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:29:48.772754  105113 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1025 21:29:48.772833  105113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:29:48.799038  105113 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1025 21:29:48.799099  105113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1025 21:29:48.799112  105113 start.go:472] detecting cgroup driver to use...
	I1025 21:29:48.799149  105113 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 21:29:48.799201  105113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 21:29:48.812184  105113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 21:29:48.821462  105113 docker.go:198] disabling cri-docker service (if available) ...
	I1025 21:29:48.821500  105113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 21:29:48.832515  105113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 21:29:48.845124  105113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 21:29:48.923944  105113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 21:29:48.936358  105113 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1025 21:29:49.003008  105113 docker.go:214] disabling docker service ...
	I1025 21:29:49.003057  105113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 21:29:49.019263  105113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 21:29:49.028924  105113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 21:29:49.099420  105113 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1025 21:29:49.099490  105113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 21:29:49.175648  105113 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1025 21:29:49.175716  105113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 21:29:49.185114  105113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 21:29:49.199575  105113 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1025 21:29:49.199617  105113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 21:29:49.199665  105113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:29:49.208916  105113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 21:29:49.208989  105113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:29:49.217764  105113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:29:49.226241  105113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:29:49.235227  105113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 21:29:49.243438  105113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 21:29:49.250440  105113 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1025 21:29:49.251150  105113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 21:29:49.258623  105113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 21:29:49.319108  105113 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 21:29:49.418386  105113 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 21:29:49.418454  105113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 21:29:49.421635  105113 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1025 21:29:49.421657  105113 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1025 21:29:49.421671  105113 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I1025 21:29:49.421686  105113 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 21:29:49.421698  105113 command_runner.go:130] > Access: 2023-10-25 21:29:49.404140866 +0000
	I1025 21:29:49.421712  105113 command_runner.go:130] > Modify: 2023-10-25 21:29:49.404140866 +0000
	I1025 21:29:49.421725  105113 command_runner.go:130] > Change: 2023-10-25 21:29:49.404140866 +0000
	I1025 21:29:49.421735  105113 command_runner.go:130] >  Birth: -
	I1025 21:29:49.421756  105113 start.go:540] Will wait 60s for crictl version
	I1025 21:29:49.421796  105113 ssh_runner.go:195] Run: which crictl
	I1025 21:29:49.424677  105113 command_runner.go:130] > /usr/bin/crictl
	I1025 21:29:49.424733  105113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 21:29:49.453793  105113 command_runner.go:130] > Version:  0.1.0
	I1025 21:29:49.453816  105113 command_runner.go:130] > RuntimeName:  cri-o
	I1025 21:29:49.453825  105113 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1025 21:29:49.453833  105113 command_runner.go:130] > RuntimeApiVersion:  v1
	I1025 21:29:49.455483  105113 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1025 21:29:49.455559  105113 ssh_runner.go:195] Run: crio --version
	I1025 21:29:49.485331  105113 command_runner.go:130] > crio version 1.24.6
	I1025 21:29:49.485357  105113 command_runner.go:130] > Version:          1.24.6
	I1025 21:29:49.485371  105113 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1025 21:29:49.485376  105113 command_runner.go:130] > GitTreeState:     clean
	I1025 21:29:49.485382  105113 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1025 21:29:49.485389  105113 command_runner.go:130] > GoVersion:        go1.18.2
	I1025 21:29:49.485395  105113 command_runner.go:130] > Compiler:         gc
	I1025 21:29:49.485402  105113 command_runner.go:130] > Platform:         linux/amd64
	I1025 21:29:49.485416  105113 command_runner.go:130] > Linkmode:         dynamic
	I1025 21:29:49.485432  105113 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1025 21:29:49.485443  105113 command_runner.go:130] > SeccompEnabled:   true
	I1025 21:29:49.485453  105113 command_runner.go:130] > AppArmorEnabled:  false
	I1025 21:29:49.486841  105113 ssh_runner.go:195] Run: crio --version
	I1025 21:29:49.517686  105113 command_runner.go:130] > crio version 1.24.6
	I1025 21:29:49.517705  105113 command_runner.go:130] > Version:          1.24.6
	I1025 21:29:49.517711  105113 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1025 21:29:49.517716  105113 command_runner.go:130] > GitTreeState:     clean
	I1025 21:29:49.517721  105113 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1025 21:29:49.517729  105113 command_runner.go:130] > GoVersion:        go1.18.2
	I1025 21:29:49.517734  105113 command_runner.go:130] > Compiler:         gc
	I1025 21:29:49.517738  105113 command_runner.go:130] > Platform:         linux/amd64
	I1025 21:29:49.517743  105113 command_runner.go:130] > Linkmode:         dynamic
	I1025 21:29:49.517757  105113 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1025 21:29:49.517764  105113 command_runner.go:130] > SeccompEnabled:   true
	I1025 21:29:49.517768  105113 command_runner.go:130] > AppArmorEnabled:  false
	I1025 21:29:49.520627  105113 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1025 21:29:49.522107  105113 cli_runner.go:164] Run: docker network inspect multinode-874778 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:29:49.537998  105113 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1025 21:29:49.541275  105113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 21:29:49.550870  105113 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 21:29:49.550928  105113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 21:29:49.600488  105113 command_runner.go:130] > {
	I1025 21:29:49.600513  105113 command_runner.go:130] >   "images": [
	I1025 21:29:49.600521  105113 command_runner.go:130] >     {
	I1025 21:29:49.600534  105113 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1025 21:29:49.600541  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.600548  105113 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1025 21:29:49.600552  105113 command_runner.go:130] >       ],
	I1025 21:29:49.600557  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.600565  105113 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1025 21:29:49.600573  105113 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1025 21:29:49.600577  105113 command_runner.go:130] >       ],
	I1025 21:29:49.600582  105113 command_runner.go:130] >       "size": "65258016",
	I1025 21:29:49.600590  105113 command_runner.go:130] >       "uid": null,
	I1025 21:29:49.600603  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.600615  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.600620  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.600624  105113 command_runner.go:130] >     },
	I1025 21:29:49.600629  105113 command_runner.go:130] >     {
	I1025 21:29:49.600635  105113 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1025 21:29:49.600642  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.600647  105113 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1025 21:29:49.600651  105113 command_runner.go:130] >       ],
	I1025 21:29:49.600656  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.600664  105113 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1025 21:29:49.600671  105113 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1025 21:29:49.600677  105113 command_runner.go:130] >       ],
	I1025 21:29:49.600686  105113 command_runner.go:130] >       "size": "31470524",
	I1025 21:29:49.600693  105113 command_runner.go:130] >       "uid": null,
	I1025 21:29:49.600697  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.600703  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.600707  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.600715  105113 command_runner.go:130] >     },
	I1025 21:29:49.600719  105113 command_runner.go:130] >     {
	I1025 21:29:49.600725  105113 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1025 21:29:49.600732  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.600737  105113 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1025 21:29:49.600743  105113 command_runner.go:130] >       ],
	I1025 21:29:49.600747  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.600758  105113 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1025 21:29:49.600765  105113 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1025 21:29:49.600771  105113 command_runner.go:130] >       ],
	I1025 21:29:49.600775  105113 command_runner.go:130] >       "size": "53621675",
	I1025 21:29:49.600782  105113 command_runner.go:130] >       "uid": null,
	I1025 21:29:49.600786  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.600794  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.600798  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.600802  105113 command_runner.go:130] >     },
	I1025 21:29:49.600805  105113 command_runner.go:130] >     {
	I1025 21:29:49.600812  105113 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1025 21:29:49.600821  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.600826  105113 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1025 21:29:49.600832  105113 command_runner.go:130] >       ],
	I1025 21:29:49.600836  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.600843  105113 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1025 21:29:49.600851  105113 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1025 21:29:49.600863  105113 command_runner.go:130] >       ],
	I1025 21:29:49.600868  105113 command_runner.go:130] >       "size": "295456551",
	I1025 21:29:49.600874  105113 command_runner.go:130] >       "uid": {
	I1025 21:29:49.600878  105113 command_runner.go:130] >         "value": "0"
	I1025 21:29:49.600884  105113 command_runner.go:130] >       },
	I1025 21:29:49.600889  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.600895  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.600905  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.600911  105113 command_runner.go:130] >     },
	I1025 21:29:49.600914  105113 command_runner.go:130] >     {
	I1025 21:29:49.600925  105113 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1025 21:29:49.600930  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.600940  105113 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1025 21:29:49.600946  105113 command_runner.go:130] >       ],
	I1025 21:29:49.600950  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.600960  105113 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1025 21:29:49.600967  105113 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1025 21:29:49.600973  105113 command_runner.go:130] >       ],
	I1025 21:29:49.600980  105113 command_runner.go:130] >       "size": "127165392",
	I1025 21:29:49.600984  105113 command_runner.go:130] >       "uid": {
	I1025 21:29:49.600988  105113 command_runner.go:130] >         "value": "0"
	I1025 21:29:49.600994  105113 command_runner.go:130] >       },
	I1025 21:29:49.600998  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.601004  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.601009  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.601014  105113 command_runner.go:130] >     },
	I1025 21:29:49.601018  105113 command_runner.go:130] >     {
	I1025 21:29:49.601024  105113 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1025 21:29:49.601031  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.601036  105113 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1025 21:29:49.601044  105113 command_runner.go:130] >       ],
	I1025 21:29:49.601049  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.601059  105113 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1025 21:29:49.601068  105113 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1025 21:29:49.601074  105113 command_runner.go:130] >       ],
	I1025 21:29:49.601078  105113 command_runner.go:130] >       "size": "123188534",
	I1025 21:29:49.601085  105113 command_runner.go:130] >       "uid": {
	I1025 21:29:49.601089  105113 command_runner.go:130] >         "value": "0"
	I1025 21:29:49.601094  105113 command_runner.go:130] >       },
	I1025 21:29:49.601099  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.601103  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.601108  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.601114  105113 command_runner.go:130] >     },
	I1025 21:29:49.601117  105113 command_runner.go:130] >     {
	I1025 21:29:49.601123  105113 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1025 21:29:49.601130  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.601137  105113 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1025 21:29:49.601142  105113 command_runner.go:130] >       ],
	I1025 21:29:49.601149  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.601158  105113 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1025 21:29:49.601167  105113 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1025 21:29:49.601171  105113 command_runner.go:130] >       ],
	I1025 21:29:49.601175  105113 command_runner.go:130] >       "size": "74691991",
	I1025 21:29:49.601181  105113 command_runner.go:130] >       "uid": null,
	I1025 21:29:49.601185  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.601191  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.601195  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.601200  105113 command_runner.go:130] >     },
	I1025 21:29:49.601203  105113 command_runner.go:130] >     {
	I1025 21:29:49.601212  105113 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1025 21:29:49.601216  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.601224  105113 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1025 21:29:49.601227  105113 command_runner.go:130] >       ],
	I1025 21:29:49.601232  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.601281  105113 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1025 21:29:49.601297  105113 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1025 21:29:49.601308  105113 command_runner.go:130] >       ],
	I1025 21:29:49.601319  105113 command_runner.go:130] >       "size": "61498678",
	I1025 21:29:49.601324  105113 command_runner.go:130] >       "uid": {
	I1025 21:29:49.601331  105113 command_runner.go:130] >         "value": "0"
	I1025 21:29:49.601335  105113 command_runner.go:130] >       },
	I1025 21:29:49.601340  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.601344  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.601350  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.601354  105113 command_runner.go:130] >     },
	I1025 21:29:49.601360  105113 command_runner.go:130] >     {
	I1025 21:29:49.601366  105113 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1025 21:29:49.601372  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.601377  105113 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1025 21:29:49.601383  105113 command_runner.go:130] >       ],
	I1025 21:29:49.601387  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.601394  105113 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1025 21:29:49.601404  105113 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1025 21:29:49.601407  105113 command_runner.go:130] >       ],
	I1025 21:29:49.601414  105113 command_runner.go:130] >       "size": "750414",
	I1025 21:29:49.601421  105113 command_runner.go:130] >       "uid": {
	I1025 21:29:49.601425  105113 command_runner.go:130] >         "value": "65535"
	I1025 21:29:49.601431  105113 command_runner.go:130] >       },
	I1025 21:29:49.601435  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.601439  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.601445  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.601449  105113 command_runner.go:130] >     }
	I1025 21:29:49.601455  105113 command_runner.go:130] >   ]
	I1025 21:29:49.601458  105113 command_runner.go:130] > }
	I1025 21:29:49.602773  105113 crio.go:496] all images are preloaded for cri-o runtime.
	I1025 21:29:49.602793  105113 crio.go:415] Images already preloaded, skipping extraction
	I1025 21:29:49.602839  105113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 21:29:49.633960  105113 command_runner.go:130] > {
	I1025 21:29:49.633980  105113 command_runner.go:130] >   "images": [
	I1025 21:29:49.633986  105113 command_runner.go:130] >     {
	I1025 21:29:49.633999  105113 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1025 21:29:49.634006  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.634021  105113 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1025 21:29:49.634032  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634039  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.634056  105113 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1025 21:29:49.634067  105113 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1025 21:29:49.634072  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634078  105113 command_runner.go:130] >       "size": "65258016",
	I1025 21:29:49.634084  105113 command_runner.go:130] >       "uid": null,
	I1025 21:29:49.634089  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.634099  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.634106  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.634110  105113 command_runner.go:130] >     },
	I1025 21:29:49.634116  105113 command_runner.go:130] >     {
	I1025 21:29:49.634122  105113 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1025 21:29:49.634128  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.634134  105113 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1025 21:29:49.634138  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634142  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.634150  105113 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1025 21:29:49.634158  105113 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1025 21:29:49.634161  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634170  105113 command_runner.go:130] >       "size": "31470524",
	I1025 21:29:49.634173  105113 command_runner.go:130] >       "uid": null,
	I1025 21:29:49.634177  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.634181  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.634185  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.634190  105113 command_runner.go:130] >     },
	I1025 21:29:49.634194  105113 command_runner.go:130] >     {
	I1025 21:29:49.634202  105113 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1025 21:29:49.634206  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.634214  105113 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1025 21:29:49.634217  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634222  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.634229  105113 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1025 21:29:49.634238  105113 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1025 21:29:49.634242  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634251  105113 command_runner.go:130] >       "size": "53621675",
	I1025 21:29:49.634257  105113 command_runner.go:130] >       "uid": null,
	I1025 21:29:49.634261  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.634268  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.634272  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.634292  105113 command_runner.go:130] >     },
	I1025 21:29:49.634299  105113 command_runner.go:130] >     {
	I1025 21:29:49.634312  105113 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1025 21:29:49.634318  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.634326  105113 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1025 21:29:49.634330  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634336  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.634344  105113 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1025 21:29:49.634353  105113 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1025 21:29:49.634371  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634382  105113 command_runner.go:130] >       "size": "295456551",
	I1025 21:29:49.634386  105113 command_runner.go:130] >       "uid": {
	I1025 21:29:49.634390  105113 command_runner.go:130] >         "value": "0"
	I1025 21:29:49.634395  105113 command_runner.go:130] >       },
	I1025 21:29:49.634400  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.634404  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.634410  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.634414  105113 command_runner.go:130] >     },
	I1025 21:29:49.634420  105113 command_runner.go:130] >     {
	I1025 21:29:49.634426  105113 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1025 21:29:49.634431  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.634436  105113 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1025 21:29:49.634442  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634447  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.634458  105113 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1025 21:29:49.634467  105113 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1025 21:29:49.634473  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634477  105113 command_runner.go:130] >       "size": "127165392",
	I1025 21:29:49.634481  105113 command_runner.go:130] >       "uid": {
	I1025 21:29:49.634485  105113 command_runner.go:130] >         "value": "0"
	I1025 21:29:49.634488  105113 command_runner.go:130] >       },
	I1025 21:29:49.634494  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.634501  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.634505  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.634511  105113 command_runner.go:130] >     },
	I1025 21:29:49.634514  105113 command_runner.go:130] >     {
	I1025 21:29:49.634521  105113 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1025 21:29:49.634528  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.634533  105113 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1025 21:29:49.634539  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634543  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.634553  105113 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1025 21:29:49.634561  105113 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1025 21:29:49.634567  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634572  105113 command_runner.go:130] >       "size": "123188534",
	I1025 21:29:49.634576  105113 command_runner.go:130] >       "uid": {
	I1025 21:29:49.634580  105113 command_runner.go:130] >         "value": "0"
	I1025 21:29:49.634586  105113 command_runner.go:130] >       },
	I1025 21:29:49.634591  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.634599  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.634603  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.634609  105113 command_runner.go:130] >     },
	I1025 21:29:49.634613  105113 command_runner.go:130] >     {
	I1025 21:29:49.634624  105113 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1025 21:29:49.634631  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.634639  105113 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1025 21:29:49.634643  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634649  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.634656  105113 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1025 21:29:49.634665  105113 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1025 21:29:49.634669  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634674  105113 command_runner.go:130] >       "size": "74691991",
	I1025 21:29:49.634680  105113 command_runner.go:130] >       "uid": null,
	I1025 21:29:49.634684  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.634688  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.634694  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.634698  105113 command_runner.go:130] >     },
	I1025 21:29:49.634705  105113 command_runner.go:130] >     {
	I1025 21:29:49.634714  105113 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1025 21:29:49.634718  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.634724  105113 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1025 21:29:49.634729  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634734  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.634754  105113 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1025 21:29:49.634764  105113 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1025 21:29:49.634768  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634775  105113 command_runner.go:130] >       "size": "61498678",
	I1025 21:29:49.634778  105113 command_runner.go:130] >       "uid": {
	I1025 21:29:49.634785  105113 command_runner.go:130] >         "value": "0"
	I1025 21:29:49.634789  105113 command_runner.go:130] >       },
	I1025 21:29:49.634793  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.634800  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.634804  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.634811  105113 command_runner.go:130] >     },
	I1025 21:29:49.634814  105113 command_runner.go:130] >     {
	I1025 21:29:49.634822  105113 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1025 21:29:49.634829  105113 command_runner.go:130] >       "repoTags": [
	I1025 21:29:49.634833  105113 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1025 21:29:49.634840  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634844  105113 command_runner.go:130] >       "repoDigests": [
	I1025 21:29:49.634850  105113 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1025 21:29:49.634860  105113 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1025 21:29:49.634863  105113 command_runner.go:130] >       ],
	I1025 21:29:49.634868  105113 command_runner.go:130] >       "size": "750414",
	I1025 21:29:49.634875  105113 command_runner.go:130] >       "uid": {
	I1025 21:29:49.634879  105113 command_runner.go:130] >         "value": "65535"
	I1025 21:29:49.634885  105113 command_runner.go:130] >       },
	I1025 21:29:49.634889  105113 command_runner.go:130] >       "username": "",
	I1025 21:29:49.634896  105113 command_runner.go:130] >       "spec": null,
	I1025 21:29:49.634900  105113 command_runner.go:130] >       "pinned": false
	I1025 21:29:49.634904  105113 command_runner.go:130] >     }
	I1025 21:29:49.634909  105113 command_runner.go:130] >   ]
	I1025 21:29:49.634913  105113 command_runner.go:130] > }
	I1025 21:29:49.635016  105113 crio.go:496] all images are preloaded for cri-o runtime.
	I1025 21:29:49.635026  105113 cache_images.go:84] Images are preloaded, skipping loading
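The preload check logged above (crio.go:496, cache_images.go:84) boils down to parsing `sudo crictl images --output json` and confirming every image required for the requested Kubernetes version is already present. A minimal sketch of that comparison, using an abbreviated sample of the JSON shown in this log (the function name and sample are illustrative, not minikube's actual code):

```python
import json

# Abbreviated from the `crictl images --output json` output logged above.
crictl_output = """
{
  "images": [
    {"repoTags": ["registry.k8s.io/kube-apiserver:v1.28.3"]},
    {"repoTags": ["registry.k8s.io/kube-proxy:v1.28.3"]},
    {"repoTags": ["registry.k8s.io/pause:3.9"]}
  ]
}
"""

def all_preloaded(output, required):
    # Collect every repo tag reported by the runtime, then check that the
    # required set (kube-apiserver, kube-proxy, pause, ...) is a subset.
    present = {tag
               for img in json.loads(output)["images"]
               for tag in img.get("repoTags", [])}
    return required.issubset(present)

required = {
    "registry.k8s.io/kube-apiserver:v1.28.3",
    "registry.k8s.io/kube-proxy:v1.28.3",
    "registry.k8s.io/pause:3.9",
}
print(all_preloaded(crictl_output, required))  # True
```

When the subset check passes, minikube logs "all images are preloaded for cri-o runtime" and skips tarball extraction, as seen twice in this run.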
	I1025 21:29:49.635087  105113 ssh_runner.go:195] Run: crio config
	I1025 21:29:49.670644  105113 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1025 21:29:49.670676  105113 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1025 21:29:49.670695  105113 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1025 21:29:49.670701  105113 command_runner.go:130] > #
	I1025 21:29:49.670711  105113 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1025 21:29:49.670721  105113 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1025 21:29:49.670734  105113 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1025 21:29:49.670752  105113 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1025 21:29:49.670759  105113 command_runner.go:130] > # reload'.
	I1025 21:29:49.670775  105113 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1025 21:29:49.670790  105113 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1025 21:29:49.670805  105113 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1025 21:29:49.670819  105113 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1025 21:29:49.670828  105113 command_runner.go:130] > [crio]
	I1025 21:29:49.670850  105113 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1025 21:29:49.670864  105113 command_runner.go:130] > # containers images, in this directory.
	I1025 21:29:49.670879  105113 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1025 21:29:49.670891  105113 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1025 21:29:49.670904  105113 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1025 21:29:49.670918  105113 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1025 21:29:49.670931  105113 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1025 21:29:49.670939  105113 command_runner.go:130] > # storage_driver = "vfs"
	I1025 21:29:49.670949  105113 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1025 21:29:49.670960  105113 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1025 21:29:49.670968  105113 command_runner.go:130] > # storage_option = [
	I1025 21:29:49.670974  105113 command_runner.go:130] > # ]
	I1025 21:29:49.670986  105113 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1025 21:29:49.671000  105113 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1025 21:29:49.671013  105113 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1025 21:29:49.671025  105113 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1025 21:29:49.671040  105113 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1025 21:29:49.671049  105113 command_runner.go:130] > # always happen on a node reboot
	I1025 21:29:49.671069  105113 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1025 21:29:49.671080  105113 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1025 21:29:49.671095  105113 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1025 21:29:49.671114  105113 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1025 21:29:49.671127  105113 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1025 21:29:49.671140  105113 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1025 21:29:49.671157  105113 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1025 21:29:49.671167  105113 command_runner.go:130] > # internal_wipe = true
	I1025 21:29:49.671177  105113 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1025 21:29:49.671192  105113 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1025 21:29:49.671204  105113 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1025 21:29:49.671215  105113 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1025 21:29:49.671229  105113 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1025 21:29:49.671238  105113 command_runner.go:130] > [crio.api]
	I1025 21:29:49.671249  105113 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1025 21:29:49.671261  105113 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1025 21:29:49.671271  105113 command_runner.go:130] > # IP address on which the stream server will listen.
	I1025 21:29:49.671283  105113 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1025 21:29:49.671297  105113 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1025 21:29:49.671311  105113 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1025 21:29:49.671321  105113 command_runner.go:130] > # stream_port = "0"
	I1025 21:29:49.671346  105113 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1025 21:29:49.671358  105113 command_runner.go:130] > # stream_enable_tls = false
	I1025 21:29:49.671381  105113 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1025 21:29:49.671391  105113 command_runner.go:130] > # stream_idle_timeout = ""
	I1025 21:29:49.671404  105113 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1025 21:29:49.671419  105113 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1025 21:29:49.671431  105113 command_runner.go:130] > # minutes.
	I1025 21:29:49.671442  105113 command_runner.go:130] > # stream_tls_cert = ""
	I1025 21:29:49.671455  105113 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1025 21:29:49.671466  105113 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1025 21:29:49.671521  105113 command_runner.go:130] > # stream_tls_key = ""
	I1025 21:29:49.671539  105113 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1025 21:29:49.671551  105113 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1025 21:29:49.671565  105113 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1025 21:29:49.671573  105113 command_runner.go:130] > # stream_tls_ca = ""
	I1025 21:29:49.671597  105113 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1025 21:29:49.671609  105113 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1025 21:29:49.671630  105113 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1025 21:29:49.671642  105113 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1025 21:29:49.671677  105113 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1025 21:29:49.671691  105113 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1025 21:29:49.671701  105113 command_runner.go:130] > [crio.runtime]
	I1025 21:29:49.671712  105113 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1025 21:29:49.671725  105113 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1025 21:29:49.671732  105113 command_runner.go:130] > # "nofile=1024:2048"
	I1025 21:29:49.671744  105113 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1025 21:29:49.671754  105113 command_runner.go:130] > # default_ulimits = [
	I1025 21:29:49.671760  105113 command_runner.go:130] > # ]
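The ulimit format described in the comments above can be sketched as a crio.conf fragment. The values below are hypothetical illustrations, not taken from this run's configuration:

```toml
# Hypothetical default_ulimits entry in /etc/crio/crio.conf.
# Each entry follows "<ulimit name>=<soft limit>:<hard limit>".
[crio.runtime]
default_ulimits = [
	"nofile=1024:2048",
	"nproc=4096:8192",
]
```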
	I1025 21:29:49.671772  105113 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1025 21:29:49.671783  105113 command_runner.go:130] > # no_pivot = false
	I1025 21:29:49.671801  105113 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1025 21:29:49.671816  105113 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1025 21:29:49.671835  105113 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1025 21:29:49.671851  105113 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1025 21:29:49.671864  105113 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1025 21:29:49.671881  105113 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1025 21:29:49.671894  105113 command_runner.go:130] > # conmon = ""
	I1025 21:29:49.671903  105113 command_runner.go:130] > # Cgroup setting for conmon
	I1025 21:29:49.671913  105113 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1025 21:29:49.671918  105113 command_runner.go:130] > conmon_cgroup = "pod"
	I1025 21:29:49.671927  105113 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1025 21:29:49.671937  105113 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1025 21:29:49.671950  105113 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1025 21:29:49.671958  105113 command_runner.go:130] > # conmon_env = [
	I1025 21:29:49.671963  105113 command_runner.go:130] > # ]
	I1025 21:29:49.671972  105113 command_runner.go:130] > # Additional environment variables to set for all the
	I1025 21:29:49.671979  105113 command_runner.go:130] > # containers. These are overridden if set in the
	I1025 21:29:49.671989  105113 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1025 21:29:49.671997  105113 command_runner.go:130] > # default_env = [
	I1025 21:29:49.672001  105113 command_runner.go:130] > # ]
	I1025 21:29:49.672013  105113 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1025 21:29:49.672027  105113 command_runner.go:130] > # selinux = false
	I1025 21:29:49.672040  105113 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1025 21:29:49.672051  105113 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1025 21:29:49.672062  105113 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1025 21:29:49.672070  105113 command_runner.go:130] > # seccomp_profile = ""
	I1025 21:29:49.672076  105113 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1025 21:29:49.672084  105113 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1025 21:29:49.672089  105113 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1025 21:29:49.672096  105113 command_runner.go:130] > # which might increase security.
	I1025 21:29:49.672101  105113 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1025 21:29:49.672109  105113 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1025 21:29:49.672115  105113 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1025 21:29:49.672123  105113 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1025 21:29:49.672129  105113 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1025 21:29:49.672137  105113 command_runner.go:130] > # This option supports live configuration reload.
	I1025 21:29:49.672141  105113 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1025 21:29:49.672149  105113 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1025 21:29:49.672156  105113 command_runner.go:130] > # the cgroup blockio controller.
	I1025 21:29:49.672162  105113 command_runner.go:130] > # blockio_config_file = ""
	I1025 21:29:49.672175  105113 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1025 21:29:49.672182  105113 command_runner.go:130] > # irqbalance daemon.
	I1025 21:29:49.672187  105113 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1025 21:29:49.672195  105113 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1025 21:29:49.672200  105113 command_runner.go:130] > # This option supports live configuration reload.
	I1025 21:29:49.672205  105113 command_runner.go:130] > # rdt_config_file = ""
	I1025 21:29:49.672210  105113 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1025 21:29:49.672217  105113 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1025 21:29:49.672222  105113 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1025 21:29:49.672229  105113 command_runner.go:130] > # separate_pull_cgroup = ""
	I1025 21:29:49.672235  105113 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1025 21:29:49.672243  105113 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1025 21:29:49.672247  105113 command_runner.go:130] > # will be added.
	I1025 21:29:49.672254  105113 command_runner.go:130] > # default_capabilities = [
	I1025 21:29:49.672258  105113 command_runner.go:130] > # 	"CHOWN",
	I1025 21:29:49.672263  105113 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1025 21:29:49.672267  105113 command_runner.go:130] > # 	"FSETID",
	I1025 21:29:49.672276  105113 command_runner.go:130] > # 	"FOWNER",
	I1025 21:29:49.672280  105113 command_runner.go:130] > # 	"SETGID",
	I1025 21:29:49.672285  105113 command_runner.go:130] > # 	"SETUID",
	I1025 21:29:49.672290  105113 command_runner.go:130] > # 	"SETPCAP",
	I1025 21:29:49.672299  105113 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1025 21:29:49.672303  105113 command_runner.go:130] > # 	"KILL",
	I1025 21:29:49.672306  105113 command_runner.go:130] > # ]
	I1025 21:29:49.672316  105113 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1025 21:29:49.672355  105113 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1025 21:29:49.672365  105113 command_runner.go:130] > # add_inheritable_capabilities = true
	I1025 21:29:49.672370  105113 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1025 21:29:49.672376  105113 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1025 21:29:49.672380  105113 command_runner.go:130] > # default_sysctls = [
	I1025 21:29:49.672383  105113 command_runner.go:130] > # ]
	I1025 21:29:49.672388  105113 command_runner.go:130] > # List of devices on the host that a
	I1025 21:29:49.672393  105113 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1025 21:29:49.672400  105113 command_runner.go:130] > # allowed_devices = [
	I1025 21:29:49.672404  105113 command_runner.go:130] > # 	"/dev/fuse",
	I1025 21:29:49.672409  105113 command_runner.go:130] > # ]
	I1025 21:29:49.672414  105113 command_runner.go:130] > # List of additional devices, specified as
	I1025 21:29:49.672447  105113 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1025 21:29:49.672455  105113 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1025 21:29:49.672460  105113 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1025 21:29:49.672466  105113 command_runner.go:130] > # additional_devices = [
	I1025 21:29:49.672473  105113 command_runner.go:130] > # ]
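The additional_devices format from the comments above, sketched as a crio.conf fragment (the device paths and permissions here are hypothetical examples):

```toml
# Hypothetical additional_devices entry; each string follows
# "<device-on-host>:<device-on-container>:<permissions>".
[crio.runtime]
additional_devices = [
	"/dev/sdc:/dev/xvdc:rwm",
]
```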
	I1025 21:29:49.672478  105113 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1025 21:29:49.672482  105113 command_runner.go:130] > # cdi_spec_dirs = [
	I1025 21:29:49.672486  105113 command_runner.go:130] > # 	"/etc/cdi",
	I1025 21:29:49.672490  105113 command_runner.go:130] > # 	"/var/run/cdi",
	I1025 21:29:49.672495  105113 command_runner.go:130] > # ]
	I1025 21:29:49.672501  105113 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1025 21:29:49.672513  105113 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1025 21:29:49.672517  105113 command_runner.go:130] > # Defaults to false.
	I1025 21:29:49.672525  105113 command_runner.go:130] > # device_ownership_from_security_context = false
	I1025 21:29:49.672531  105113 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1025 21:29:49.672539  105113 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1025 21:29:49.672549  105113 command_runner.go:130] > # hooks_dir = [
	I1025 21:29:49.672555  105113 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1025 21:29:49.672562  105113 command_runner.go:130] > # ]
	I1025 21:29:49.672568  105113 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1025 21:29:49.672585  105113 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1025 21:29:49.672593  105113 command_runner.go:130] > # its default mounts from the following two files:
	I1025 21:29:49.672596  105113 command_runner.go:130] > #
	I1025 21:29:49.672604  105113 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1025 21:29:49.672610  105113 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1025 21:29:49.672618  105113 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1025 21:29:49.672624  105113 command_runner.go:130] > #
	I1025 21:29:49.672632  105113 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1025 21:29:49.672639  105113 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1025 21:29:49.672645  105113 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1025 21:29:49.672650  105113 command_runner.go:130] > #      only add mounts it finds in this file.
	I1025 21:29:49.672656  105113 command_runner.go:130] > #
	I1025 21:29:49.672663  105113 command_runner.go:130] > # default_mounts_file = ""
	I1025 21:29:49.672671  105113 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1025 21:29:49.672679  105113 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1025 21:29:49.672686  105113 command_runner.go:130] > # pids_limit = 0
	I1025 21:29:49.672691  105113 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1025 21:29:49.672699  105113 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1025 21:29:49.672706  105113 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1025 21:29:49.672715  105113 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1025 21:29:49.672719  105113 command_runner.go:130] > # log_size_max = -1
	I1025 21:29:49.672728  105113 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1025 21:29:49.672734  105113 command_runner.go:130] > # log_to_journald = false
	I1025 21:29:49.672743  105113 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1025 21:29:49.672748  105113 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1025 21:29:49.672756  105113 command_runner.go:130] > # Path to directory for container attach sockets.
	I1025 21:29:49.672761  105113 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1025 21:29:49.672768  105113 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1025 21:29:49.672772  105113 command_runner.go:130] > # bind_mount_prefix = ""
	I1025 21:29:49.672780  105113 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1025 21:29:49.672786  105113 command_runner.go:130] > # read_only = false
	I1025 21:29:49.672794  105113 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1025 21:29:49.672804  105113 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1025 21:29:49.672811  105113 command_runner.go:130] > # live configuration reload.
	I1025 21:29:49.672815  105113 command_runner.go:130] > # log_level = "info"
	I1025 21:29:49.672822  105113 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1025 21:29:49.672827  105113 command_runner.go:130] > # This option supports live configuration reload.
	I1025 21:29:49.672833  105113 command_runner.go:130] > # log_filter = ""
	I1025 21:29:49.672839  105113 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1025 21:29:49.672847  105113 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1025 21:29:49.672851  105113 command_runner.go:130] > # separated by comma.
	I1025 21:29:49.672855  105113 command_runner.go:130] > # uid_mappings = ""
	I1025 21:29:49.672863  105113 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1025 21:29:49.672869  105113 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1025 21:29:49.672875  105113 command_runner.go:130] > # separated by comma.
	I1025 21:29:49.672879  105113 command_runner.go:130] > # gid_mappings = ""
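The mapping syntax described above (containerID:HostID:Size, comma-separated for multiple ranges) can be sketched as a crio.conf fragment; the ranges below are hypothetical, not from this log:

```toml
# Hypothetical UID/GID mappings for container user namespaces.
# Each range is containerID:HostID:Size; multiple ranges are
# separated by commas.
[crio.runtime]
uid_mappings = "0:100000:65536"
gid_mappings = "0:100000:65536"
```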
	I1025 21:29:49.672885  105113 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1025 21:29:49.672893  105113 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1025 21:29:49.672899  105113 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1025 21:29:49.672905  105113 command_runner.go:130] > # minimum_mappable_uid = -1
	I1025 21:29:49.672929  105113 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1025 21:29:49.672937  105113 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1025 21:29:49.672943  105113 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1025 21:29:49.672950  105113 command_runner.go:130] > # minimum_mappable_gid = -1
	I1025 21:29:49.672955  105113 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1025 21:29:49.672963  105113 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1025 21:29:49.672972  105113 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1025 21:29:49.672982  105113 command_runner.go:130] > # ctr_stop_timeout = 30
	I1025 21:29:49.672990  105113 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1025 21:29:49.673001  105113 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1025 21:29:49.673015  105113 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1025 21:29:49.673025  105113 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1025 21:29:49.673031  105113 command_runner.go:130] > # drop_infra_ctr = true
	I1025 21:29:49.673045  105113 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1025 21:29:49.673055  105113 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1025 21:29:49.673065  105113 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1025 21:29:49.673071  105113 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1025 21:29:49.673076  105113 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1025 21:29:49.673087  105113 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1025 21:29:49.673094  105113 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1025 21:29:49.673101  105113 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1025 21:29:49.673107  105113 command_runner.go:130] > # pinns_path = ""
	I1025 21:29:49.673113  105113 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1025 21:29:49.673121  105113 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1025 21:29:49.673127  105113 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1025 21:29:49.673134  105113 command_runner.go:130] > # default_runtime = "runc"
	I1025 21:29:49.673152  105113 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1025 21:29:49.673162  105113 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1025 21:29:49.673170  105113 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1025 21:29:49.673177  105113 command_runner.go:130] > # creation as a file is not desired either.
	I1025 21:29:49.673185  105113 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1025 21:29:49.673192  105113 command_runner.go:130] > # the hostname is being managed dynamically.
	I1025 21:29:49.673197  105113 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1025 21:29:49.673201  105113 command_runner.go:130] > # ]
	I1025 21:29:49.673207  105113 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1025 21:29:49.673216  105113 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1025 21:29:49.673224  105113 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1025 21:29:49.673232  105113 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1025 21:29:49.673238  105113 command_runner.go:130] > #
	I1025 21:29:49.673243  105113 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1025 21:29:49.673250  105113 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1025 21:29:49.673254  105113 command_runner.go:130] > #  runtime_type = "oci"
	I1025 21:29:49.673259  105113 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1025 21:29:49.673266  105113 command_runner.go:130] > #  privileged_without_host_devices = false
	I1025 21:29:49.673270  105113 command_runner.go:130] > #  allowed_annotations = []
	I1025 21:29:49.673273  105113 command_runner.go:130] > # Where:
	I1025 21:29:49.673278  105113 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1025 21:29:49.673289  105113 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1025 21:29:49.673297  105113 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1025 21:29:49.673303  105113 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1025 21:29:49.673314  105113 command_runner.go:130] > #   in $PATH.
	I1025 21:29:49.673322  105113 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1025 21:29:49.673327  105113 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1025 21:29:49.673341  105113 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1025 21:29:49.673350  105113 command_runner.go:130] > #   state.
	I1025 21:29:49.673356  105113 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1025 21:29:49.673364  105113 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1025 21:29:49.673370  105113 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1025 21:29:49.673378  105113 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1025 21:29:49.673384  105113 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1025 21:29:49.673393  105113 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1025 21:29:49.673398  105113 command_runner.go:130] > #   The currently recognized values are:
	I1025 21:29:49.673407  105113 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1025 21:29:49.673414  105113 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1025 21:29:49.673422  105113 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1025 21:29:49.673427  105113 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1025 21:29:49.673437  105113 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1025 21:29:49.673465  105113 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1025 21:29:49.673474  105113 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1025 21:29:49.673480  105113 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1025 21:29:49.673487  105113 command_runner.go:130] > #   should be moved to the container's cgroup
	I1025 21:29:49.673492  105113 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1025 21:29:49.673503  105113 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1025 21:29:49.673512  105113 command_runner.go:130] > runtime_type = "oci"
	I1025 21:29:49.673519  105113 command_runner.go:130] > runtime_root = "/run/runc"
	I1025 21:29:49.673529  105113 command_runner.go:130] > runtime_config_path = ""
	I1025 21:29:49.673535  105113 command_runner.go:130] > monitor_path = ""
	I1025 21:29:49.673545  105113 command_runner.go:130] > monitor_cgroup = ""
	I1025 21:29:49.673552  105113 command_runner.go:130] > monitor_exec_cgroup = ""
	I1025 21:29:49.673603  105113 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1025 21:29:49.673610  105113 command_runner.go:130] > # running containers
	I1025 21:29:49.673614  105113 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1025 21:29:49.673622  105113 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1025 21:29:49.673635  105113 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1025 21:29:49.673643  105113 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1025 21:29:49.673648  105113 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1025 21:29:49.673655  105113 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1025 21:29:49.673659  105113 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1025 21:29:49.673666  105113 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1025 21:29:49.673689  105113 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1025 21:29:49.673704  105113 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1025 21:29:49.673711  105113 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1025 21:29:49.673718  105113 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1025 21:29:49.673724  105113 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1025 21:29:49.673733  105113 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1025 21:29:49.673741  105113 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1025 21:29:49.673749  105113 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1025 21:29:49.673758  105113 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1025 21:29:49.673768  105113 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1025 21:29:49.673773  105113 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1025 21:29:49.673782  105113 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1025 21:29:49.673786  105113 command_runner.go:130] > # Example:
	I1025 21:29:49.673793  105113 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1025 21:29:49.673798  105113 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1025 21:29:49.673805  105113 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1025 21:29:49.673810  105113 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1025 21:29:49.673816  105113 command_runner.go:130] > # cpuset = 0
	I1025 21:29:49.673821  105113 command_runner.go:130] > # cpushares = "0-1"
	I1025 21:29:49.673826  105113 command_runner.go:130] > # Where:
	I1025 21:29:49.673833  105113 command_runner.go:130] > # The workload name is workload-type.
	I1025 21:29:49.673840  105113 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1025 21:29:49.673847  105113 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1025 21:29:49.673853  105113 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1025 21:29:49.673869  105113 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1025 21:29:49.673881  105113 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1025 21:29:49.673890  105113 command_runner.go:130] > # 
	I1025 21:29:49.673900  105113 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1025 21:29:49.673908  105113 command_runner.go:130] > #
	I1025 21:29:49.673922  105113 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1025 21:29:49.673936  105113 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1025 21:29:49.673947  105113 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1025 21:29:49.673959  105113 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1025 21:29:49.673971  105113 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1025 21:29:49.673979  105113 command_runner.go:130] > [crio.image]
	I1025 21:29:49.673988  105113 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1025 21:29:49.673999  105113 command_runner.go:130] > # default_transport = "docker://"
	I1025 21:29:49.674016  105113 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1025 21:29:49.674024  105113 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1025 21:29:49.674029  105113 command_runner.go:130] > # global_auth_file = ""
	I1025 21:29:49.674035  105113 command_runner.go:130] > # The image used to instantiate infra containers.
	I1025 21:29:49.674040  105113 command_runner.go:130] > # This option supports live configuration reload.
	I1025 21:29:49.674047  105113 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1025 21:29:49.674054  105113 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1025 21:29:49.674060  105113 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1025 21:29:49.674065  105113 command_runner.go:130] > # This option supports live configuration reload.
	I1025 21:29:49.674074  105113 command_runner.go:130] > # pause_image_auth_file = ""
	I1025 21:29:49.674088  105113 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1025 21:29:49.674100  105113 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1025 21:29:49.674109  105113 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1025 21:29:49.674122  105113 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1025 21:29:49.674129  105113 command_runner.go:130] > # pause_command = "/pause"
	I1025 21:29:49.674134  105113 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1025 21:29:49.674147  105113 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1025 21:29:49.674156  105113 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1025 21:29:49.674173  105113 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1025 21:29:49.674185  105113 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1025 21:29:49.674192  105113 command_runner.go:130] > # signature_policy = ""
	I1025 21:29:49.674199  105113 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1025 21:29:49.674207  105113 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1025 21:29:49.674212  105113 command_runner.go:130] > # changing them here.
	I1025 21:29:49.674218  105113 command_runner.go:130] > # insecure_registries = [
	I1025 21:29:49.674222  105113 command_runner.go:130] > # ]
	I1025 21:29:49.674229  105113 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1025 21:29:49.674235  105113 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1025 21:29:49.674243  105113 command_runner.go:130] > # image_volumes = "mkdir"
	I1025 21:29:49.674249  105113 command_runner.go:130] > # Temporary directory to use for storing big files
	I1025 21:29:49.674255  105113 command_runner.go:130] > # big_files_temporary_dir = ""
	I1025 21:29:49.674261  105113 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1025 21:29:49.674267  105113 command_runner.go:130] > # CNI plugins.
	I1025 21:29:49.674271  105113 command_runner.go:130] > [crio.network]
	I1025 21:29:49.674290  105113 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1025 21:29:49.674302  105113 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1025 21:29:49.674314  105113 command_runner.go:130] > # cni_default_network = ""
	I1025 21:29:49.674333  105113 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1025 21:29:49.674349  105113 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1025 21:29:49.674361  105113 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1025 21:29:49.674371  105113 command_runner.go:130] > # plugin_dirs = [
	I1025 21:29:49.674376  105113 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1025 21:29:49.674380  105113 command_runner.go:130] > # ]
	I1025 21:29:49.674386  105113 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1025 21:29:49.674392  105113 command_runner.go:130] > [crio.metrics]
	I1025 21:29:49.674397  105113 command_runner.go:130] > # Globally enable or disable metrics support.
	I1025 21:29:49.674401  105113 command_runner.go:130] > # enable_metrics = false
	I1025 21:29:49.674409  105113 command_runner.go:130] > # Specify enabled metrics collectors.
	I1025 21:29:49.674414  105113 command_runner.go:130] > # Per default all metrics are enabled.
	I1025 21:29:49.674422  105113 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1025 21:29:49.674428  105113 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1025 21:29:49.674436  105113 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1025 21:29:49.674440  105113 command_runner.go:130] > # metrics_collectors = [
	I1025 21:29:49.674446  105113 command_runner.go:130] > # 	"operations",
	I1025 21:29:49.674453  105113 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1025 21:29:49.674458  105113 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1025 21:29:49.674462  105113 command_runner.go:130] > # 	"operations_errors",
	I1025 21:29:49.674469  105113 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1025 21:29:49.674473  105113 command_runner.go:130] > # 	"image_pulls_by_name",
	I1025 21:29:49.674477  105113 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1025 21:29:49.674483  105113 command_runner.go:130] > # 	"image_pulls_failures",
	I1025 21:29:49.674488  105113 command_runner.go:130] > # 	"image_pulls_successes",
	I1025 21:29:49.674494  105113 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1025 21:29:49.674498  105113 command_runner.go:130] > # 	"image_layer_reuse",
	I1025 21:29:49.674504  105113 command_runner.go:130] > # 	"containers_oom_total",
	I1025 21:29:49.674508  105113 command_runner.go:130] > # 	"containers_oom",
	I1025 21:29:49.674516  105113 command_runner.go:130] > # 	"processes_defunct",
	I1025 21:29:49.674520  105113 command_runner.go:130] > # 	"operations_total",
	I1025 21:29:49.674526  105113 command_runner.go:130] > # 	"operations_latency_seconds",
	I1025 21:29:49.674531  105113 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1025 21:29:49.674537  105113 command_runner.go:130] > # 	"operations_errors_total",
	I1025 21:29:49.674541  105113 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1025 21:29:49.674551  105113 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1025 21:29:49.674558  105113 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1025 21:29:49.674565  105113 command_runner.go:130] > # 	"image_pulls_success_total",
	I1025 21:29:49.674569  105113 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1025 21:29:49.674576  105113 command_runner.go:130] > # 	"containers_oom_count_total",
	I1025 21:29:49.674579  105113 command_runner.go:130] > # ]
	I1025 21:29:49.674584  105113 command_runner.go:130] > # The port on which the metrics server will listen.
	I1025 21:29:49.674590  105113 command_runner.go:130] > # metrics_port = 9090
	I1025 21:29:49.674598  105113 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1025 21:29:49.674605  105113 command_runner.go:130] > # metrics_socket = ""
	I1025 21:29:49.674610  105113 command_runner.go:130] > # The certificate for the secure metrics server.
	I1025 21:29:49.674616  105113 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1025 21:29:49.674627  105113 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1025 21:29:49.674635  105113 command_runner.go:130] > # certificate on any modification event.
	I1025 21:29:49.674639  105113 command_runner.go:130] > # metrics_cert = ""
	I1025 21:29:49.674647  105113 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1025 21:29:49.674652  105113 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1025 21:29:49.674658  105113 command_runner.go:130] > # metrics_key = ""
	I1025 21:29:49.674666  105113 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1025 21:29:49.674672  105113 command_runner.go:130] > [crio.tracing]
	I1025 21:29:49.674677  105113 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1025 21:29:49.674684  105113 command_runner.go:130] > # enable_tracing = false
	I1025 21:29:49.674690  105113 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1025 21:29:49.674697  105113 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1025 21:29:49.674702  105113 command_runner.go:130] > # Number of samples to collect per million spans.
	I1025 21:29:49.674707  105113 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1025 21:29:49.674715  105113 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1025 21:29:49.674719  105113 command_runner.go:130] > [crio.stats]
	I1025 21:29:49.674724  105113 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1025 21:29:49.674731  105113 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1025 21:29:49.674735  105113 command_runner.go:130] > # stats_collection_period = 0
	I1025 21:29:49.674778  105113 command_runner.go:130] ! time="2023-10-25 21:29:49.668505204Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1025 21:29:49.674792  105113 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1025 21:29:49.674871  105113 cni.go:84] Creating CNI manager for ""
	I1025 21:29:49.674881  105113 cni.go:136] 1 nodes found, recommending kindnet
	I1025 21:29:49.674897  105113 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 21:29:49.674922  105113 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-874778 NodeName:multinode-874778 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 21:29:49.675065  105113 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-874778"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 21:29:49.675131  105113 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-874778 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-874778 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 21:29:49.675183  105113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 21:29:49.683075  105113 command_runner.go:130] > kubeadm
	I1025 21:29:49.683088  105113 command_runner.go:130] > kubectl
	I1025 21:29:49.683092  105113 command_runner.go:130] > kubelet
	I1025 21:29:49.683111  105113 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 21:29:49.683176  105113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 21:29:49.690703  105113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1025 21:29:49.705793  105113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 21:29:49.720594  105113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1025 21:29:49.735341  105113 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1025 21:29:49.738195  105113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
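	The hosts-file update logged above uses a replace-then-append pattern, so the entry stays unique across repeated runs. A minimal sketch of the same pattern against a scratch file (the scratch path and the stale 192.168.58.1 entry are made up for illustration, not taken from this run):

```shell
# Mirror minikube's /etc/hosts update on a scratch file: drop any stale
# control-plane.minikube.internal line, then append the current mapping.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.58.1\tcontrol-plane.minikube.internal\n' > "$hosts"
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '192.168.58.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

	After the run the file holds exactly one control-plane entry, pointing at the node's current IP, which is why minikube can re-execute the command safely on every start.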
	I1025 21:29:49.747320  105113 certs.go:56] Setting up /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778 for IP: 192.168.58.2
	I1025 21:29:49.747360  105113 certs.go:190] acquiring lock for shared ca certs: {Name:mk35413dbabac2652d1fa66d4e17d237360108a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:29:49.747499  105113 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key
	I1025 21:29:49.747556  105113 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key
	I1025 21:29:49.747614  105113 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/client.key
	I1025 21:29:49.747637  105113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/client.crt with IP's: []
	I1025 21:29:49.850060  105113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/client.crt ...
	I1025 21:29:49.850087  105113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/client.crt: {Name:mkc568a45653da60aabbbb0696891be0bb1cf42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:29:49.850239  105113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/client.key ...
	I1025 21:29:49.850250  105113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/client.key: {Name:mkd7d672804a43bcfdeee8627a6f0583a0bd2468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:29:49.850338  105113 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/apiserver.key.cee25041
	I1025 21:29:49.850357  105113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 21:29:49.993738  105113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/apiserver.crt.cee25041 ...
	I1025 21:29:49.993769  105113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/apiserver.crt.cee25041: {Name:mk8b4522d1015a8fee41216f7c3928f8f3ff00f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:29:49.993921  105113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/apiserver.key.cee25041 ...
	I1025 21:29:49.993932  105113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/apiserver.key.cee25041: {Name:mk3547b92762a8be7c2dac41f17167341b4a1d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:29:49.994002  105113 certs.go:337] copying /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/apiserver.crt
	I1025 21:29:49.994082  105113 certs.go:341] copying /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/apiserver.key
	I1025 21:29:49.994138  105113 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/proxy-client.key
	I1025 21:29:49.994155  105113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/proxy-client.crt with IP's: []
	I1025 21:29:50.337214  105113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/proxy-client.crt ...
	I1025 21:29:50.337243  105113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/proxy-client.crt: {Name:mk706bad323eac94153a0f1d889b8016722fac09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:29:50.337440  105113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/proxy-client.key ...
	I1025 21:29:50.337455  105113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/proxy-client.key: {Name:mk750fc966d7aaac16b0e41aa95bceb16b5657ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:29:50.337558  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 21:29:50.337580  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 21:29:50.337593  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 21:29:50.337604  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 21:29:50.337621  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 21:29:50.337633  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 21:29:50.337646  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 21:29:50.337659  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 21:29:50.337706  105113 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/18323.pem (1338 bytes)
	W1025 21:29:50.337741  105113 certs.go:433] ignoring /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/18323_empty.pem, impossibly tiny 0 bytes
	I1025 21:29:50.337753  105113 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 21:29:50.337780  105113 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem (1078 bytes)
	I1025 21:29:50.337804  105113 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem (1123 bytes)
	I1025 21:29:50.337830  105113 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem (1675 bytes)
	I1025 21:29:50.337865  105113 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem (1708 bytes)
	I1025 21:29:50.337891  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:29:50.337905  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/18323.pem -> /usr/share/ca-certificates/18323.pem
	I1025 21:29:50.337916  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem -> /usr/share/ca-certificates/183232.pem
	I1025 21:29:50.338426  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 21:29:50.359876  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 21:29:50.379845  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 21:29:50.399572  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 21:29:50.419820  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 21:29:50.438965  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 21:29:50.458503  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 21:29:50.478073  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 21:29:50.497635  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 21:29:50.517147  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/certs/18323.pem --> /usr/share/ca-certificates/18323.pem (1338 bytes)
	I1025 21:29:50.537138  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem --> /usr/share/ca-certificates/183232.pem (1708 bytes)
	I1025 21:29:50.556972  105113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 21:29:50.571870  105113 ssh_runner.go:195] Run: openssl version
	I1025 21:29:50.576465  105113 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1025 21:29:50.576541  105113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 21:29:50.584420  105113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:29:50.587343  105113 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 25 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:29:50.587375  105113 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:29:50.587413  105113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:29:50.593082  105113 command_runner.go:130] > b5213941
	I1025 21:29:50.593302  105113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 21:29:50.601534  105113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18323.pem && ln -fs /usr/share/ca-certificates/18323.pem /etc/ssl/certs/18323.pem"
	I1025 21:29:50.609107  105113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18323.pem
	I1025 21:29:50.611978  105113 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 25 21:17 /usr/share/ca-certificates/18323.pem
	I1025 21:29:50.612001  105113 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:17 /usr/share/ca-certificates/18323.pem
	I1025 21:29:50.612028  105113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18323.pem
	I1025 21:29:50.617604  105113 command_runner.go:130] > 51391683
	I1025 21:29:50.617782  105113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18323.pem /etc/ssl/certs/51391683.0"
	I1025 21:29:50.625645  105113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183232.pem && ln -fs /usr/share/ca-certificates/183232.pem /etc/ssl/certs/183232.pem"
	I1025 21:29:50.633571  105113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183232.pem
	I1025 21:29:50.636454  105113 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 25 21:17 /usr/share/ca-certificates/183232.pem
	I1025 21:29:50.636491  105113 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:17 /usr/share/ca-certificates/183232.pem
	I1025 21:29:50.636523  105113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183232.pem
	I1025 21:29:50.642301  105113 command_runner.go:130] > 3ec20f2e
	I1025 21:29:50.642347  105113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183232.pem /etc/ssl/certs/3ec20f2e.0"
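	The three symlink steps above follow OpenSSL's hashed certificate directory convention: `openssl x509 -hash -noout` prints the subject hash, and the trust directory must contain a `<hash>.0` symlink to the PEM file for lookup to succeed. A self-contained sketch with a throwaway CA (all paths here are temporary stand-ins, not the certificates from this run):

```shell
# Generate a throwaway self-signed CA, compute its subject hash, and
# create the <hash>.0 symlink that OpenSSL's -CApath lookup resolves.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
ln -fs "$tmp/ca.pem" "$tmp/$hash.0"
# The CA now verifies through the hashed-directory lookup.
openssl verify -CApath "$tmp" "$tmp/ca.pem"
```

	This is the same reason the log checks `test -L /etc/ssl/certs/<hash>.0` before linking: the hash, not the filename, is what consumers of the trust directory resolve.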
	I1025 21:29:50.649988  105113 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 21:29:50.652708  105113 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 21:29:50.652737  105113 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 21:29:50.652770  105113 kubeadm.go:404] StartCluster: {Name:multinode-874778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-874778 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:29:50.652848  105113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 21:29:50.652878  105113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 21:29:50.684729  105113 cri.go:89] found id: ""
	I1025 21:29:50.684784  105113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 21:29:50.692968  105113 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1025 21:29:50.692994  105113 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1025 21:29:50.693002  105113 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1025 21:29:50.693080  105113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 21:29:50.700530  105113 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 21:29:50.700581  105113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 21:29:50.707669  105113 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1025 21:29:50.707687  105113 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1025 21:29:50.707695  105113 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1025 21:29:50.707703  105113 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 21:29:50.707722  105113 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 21:29:50.707750  105113 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 21:29:50.784288  105113 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1025 21:29:50.784301  105113 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1025 21:29:50.844557  105113 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 21:29:50.844574  105113 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 21:29:59.035980  105113 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1025 21:29:59.036011  105113 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1025 21:29:59.036064  105113 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 21:29:59.036076  105113 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 21:29:59.036221  105113 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1025 21:29:59.036247  105113 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1025 21:29:59.036319  105113 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-gcp
	I1025 21:29:59.036331  105113 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-gcp
	I1025 21:29:59.036372  105113 kubeadm.go:322] OS: Linux
	I1025 21:29:59.036382  105113 command_runner.go:130] > OS: Linux
	I1025 21:29:59.036432  105113 kubeadm.go:322] CGROUPS_CPU: enabled
	I1025 21:29:59.036445  105113 command_runner.go:130] > CGROUPS_CPU: enabled
	I1025 21:29:59.036512  105113 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1025 21:29:59.036528  105113 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1025 21:29:59.036587  105113 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1025 21:29:59.036603  105113 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1025 21:29:59.036665  105113 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1025 21:29:59.036677  105113 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1025 21:29:59.036737  105113 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1025 21:29:59.036747  105113 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1025 21:29:59.036806  105113 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1025 21:29:59.036818  105113 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1025 21:29:59.036878  105113 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1025 21:29:59.036890  105113 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1025 21:29:59.036948  105113 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1025 21:29:59.036960  105113 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1025 21:29:59.037021  105113 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1025 21:29:59.037032  105113 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1025 21:29:59.037120  105113 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 21:29:59.037132  105113 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 21:29:59.037241  105113 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 21:29:59.037249  105113 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 21:29:59.037364  105113 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 21:29:59.037378  105113 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 21:29:59.037459  105113 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 21:29:59.039160  105113 out.go:204]   - Generating certificates and keys ...
	I1025 21:29:59.037581  105113 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 21:29:59.039265  105113 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 21:29:59.039281  105113 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1025 21:29:59.039362  105113 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 21:29:59.039373  105113 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1025 21:29:59.039432  105113 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 21:29:59.039439  105113 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 21:29:59.039520  105113 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1025 21:29:59.039543  105113 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1025 21:29:59.039617  105113 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1025 21:29:59.039629  105113 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1025 21:29:59.039709  105113 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1025 21:29:59.039731  105113 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1025 21:29:59.039803  105113 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1025 21:29:59.039815  105113 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1025 21:29:59.039969  105113 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-874778] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1025 21:29:59.039980  105113 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-874778] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1025 21:29:59.040049  105113 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1025 21:29:59.040060  105113 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1025 21:29:59.040191  105113 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-874778] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1025 21:29:59.040201  105113 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-874778] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1025 21:29:59.040283  105113 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 21:29:59.040293  105113 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 21:29:59.040371  105113 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 21:29:59.040381  105113 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 21:29:59.040439  105113 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1025 21:29:59.040454  105113 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1025 21:29:59.040543  105113 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 21:29:59.040557  105113 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 21:29:59.040624  105113 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 21:29:59.040639  105113 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 21:29:59.040707  105113 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 21:29:59.040717  105113 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 21:29:59.040798  105113 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 21:29:59.040808  105113 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 21:29:59.040878  105113 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 21:29:59.040888  105113 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 21:29:59.040992  105113 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 21:29:59.041002  105113 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 21:29:59.041087  105113 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 21:29:59.042618  105113 out.go:204]   - Booting up control plane ...
	I1025 21:29:59.041190  105113 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 21:29:59.042694  105113 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 21:29:59.042704  105113 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 21:29:59.042762  105113 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 21:29:59.042769  105113 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 21:29:59.042822  105113 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 21:29:59.042829  105113 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 21:29:59.042929  105113 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 21:29:59.042937  105113 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 21:29:59.043038  105113 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 21:29:59.043050  105113 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 21:29:59.043106  105113 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 21:29:59.043124  105113 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1025 21:29:59.043284  105113 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 21:29:59.043308  105113 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 21:29:59.043417  105113 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.501887 seconds
	I1025 21:29:59.043442  105113 command_runner.go:130] > [apiclient] All control plane components are healthy after 4.501887 seconds
	I1025 21:29:59.043561  105113 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 21:29:59.043576  105113 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 21:29:59.043721  105113 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 21:29:59.043733  105113 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 21:29:59.043813  105113 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 21:29:59.043824  105113 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1025 21:29:59.043993  105113 kubeadm.go:322] [mark-control-plane] Marking the node multinode-874778 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 21:29:59.044002  105113 command_runner.go:130] > [mark-control-plane] Marking the node multinode-874778 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 21:29:59.044064  105113 kubeadm.go:322] [bootstrap-token] Using token: qh5zy4.e6dupwjxobyzh70x
	I1025 21:29:59.045611  105113 out.go:204]   - Configuring RBAC rules ...
	I1025 21:29:59.044086  105113 command_runner.go:130] > [bootstrap-token] Using token: qh5zy4.e6dupwjxobyzh70x
	I1025 21:29:59.045739  105113 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 21:29:59.045753  105113 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 21:29:59.045862  105113 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 21:29:59.045872  105113 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 21:29:59.046082  105113 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 21:29:59.046097  105113 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 21:29:59.046271  105113 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 21:29:59.046302  105113 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 21:29:59.046454  105113 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 21:29:59.046470  105113 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 21:29:59.046580  105113 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 21:29:59.046604  105113 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 21:29:59.046787  105113 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 21:29:59.046804  105113 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 21:29:59.046872  105113 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1025 21:29:59.046886  105113 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1025 21:29:59.046953  105113 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1025 21:29:59.046965  105113 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1025 21:29:59.046972  105113 kubeadm.go:322] 
	I1025 21:29:59.047073  105113 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1025 21:29:59.047091  105113 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1025 21:29:59.047107  105113 kubeadm.go:322] 
	I1025 21:29:59.047203  105113 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1025 21:29:59.047215  105113 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1025 21:29:59.047221  105113 kubeadm.go:322] 
	I1025 21:29:59.047250  105113 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1025 21:29:59.047259  105113 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1025 21:29:59.047353  105113 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 21:29:59.047364  105113 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 21:29:59.047433  105113 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 21:29:59.047443  105113 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 21:29:59.047449  105113 kubeadm.go:322] 
	I1025 21:29:59.047521  105113 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1025 21:29:59.047531  105113 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1025 21:29:59.047536  105113 kubeadm.go:322] 
	I1025 21:29:59.047576  105113 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 21:29:59.047582  105113 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 21:29:59.047585  105113 kubeadm.go:322] 
	I1025 21:29:59.047627  105113 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1025 21:29:59.047636  105113 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1025 21:29:59.047728  105113 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 21:29:59.047738  105113 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 21:29:59.047823  105113 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 21:29:59.047833  105113 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 21:29:59.047839  105113 kubeadm.go:322] 
	I1025 21:29:59.047943  105113 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 21:29:59.047953  105113 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1025 21:29:59.048044  105113 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1025 21:29:59.048053  105113 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1025 21:29:59.048059  105113 kubeadm.go:322] 
	I1025 21:29:59.048152  105113 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token qh5zy4.e6dupwjxobyzh70x \
	I1025 21:29:59.048162  105113 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token qh5zy4.e6dupwjxobyzh70x \
	I1025 21:29:59.048272  105113 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:81aa62e087573fa9098e2a57ea7cc4407ea343d82712bf34cdaff83258d6f892 \
	I1025 21:29:59.048282  105113 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:81aa62e087573fa9098e2a57ea7cc4407ea343d82712bf34cdaff83258d6f892 \
	I1025 21:29:59.048312  105113 kubeadm.go:322] 	--control-plane 
	I1025 21:29:59.048322  105113 command_runner.go:130] > 	--control-plane 
	I1025 21:29:59.048330  105113 kubeadm.go:322] 
	I1025 21:29:59.048435  105113 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1025 21:29:59.048444  105113 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1025 21:29:59.048450  105113 kubeadm.go:322] 
	I1025 21:29:59.048557  105113 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token qh5zy4.e6dupwjxobyzh70x \
	I1025 21:29:59.048567  105113 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token qh5zy4.e6dupwjxobyzh70x \
	I1025 21:29:59.048674  105113 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:81aa62e087573fa9098e2a57ea7cc4407ea343d82712bf34cdaff83258d6f892 
	I1025 21:29:59.048684  105113 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:81aa62e087573fa9098e2a57ea7cc4407ea343d82712bf34cdaff83258d6f892 
	I1025 21:29:59.048705  105113 cni.go:84] Creating CNI manager for ""
	I1025 21:29:59.048716  105113 cni.go:136] 1 nodes found, recommending kindnet
	I1025 21:29:59.050384  105113 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1025 21:29:59.051753  105113 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 21:29:59.055241  105113 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1025 21:29:59.055263  105113 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1025 21:29:59.055273  105113 command_runner.go:130] > Device: 33h/51d	Inode: 555944      Links: 1
	I1025 21:29:59.055284  105113 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 21:29:59.055294  105113 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1025 21:29:59.055302  105113 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1025 21:29:59.055313  105113 command_runner.go:130] > Change: 2023-10-25 21:11:12.434356897 +0000
	I1025 21:29:59.055343  105113 command_runner.go:130] >  Birth: 2023-10-25 21:11:12.410354451 +0000
	I1025 21:29:59.055393  105113 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1025 21:29:59.055406  105113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1025 21:29:59.072881  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 21:29:59.717119  105113 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1025 21:29:59.728277  105113 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1025 21:29:59.734989  105113 command_runner.go:130] > serviceaccount/kindnet created
	I1025 21:29:59.743594  105113 command_runner.go:130] > daemonset.apps/kindnet created
	I1025 21:29:59.748112  105113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 21:29:59.748168  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:29:59.748201  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc minikube.k8s.io/name=multinode-874778 minikube.k8s.io/updated_at=2023_10_25T21_29_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:29:59.755080  105113 command_runner.go:130] > -16
	I1025 21:29:59.755104  105113 ops.go:34] apiserver oom_adj: -16
	I1025 21:29:59.832721  105113 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1025 21:29:59.836329  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:29:59.841088  105113 command_runner.go:130] > node/multinode-874778 labeled
	I1025 21:29:59.898840  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:29:59.898940  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:29:59.966066  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:00.466871  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:00.532181  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:00.967039  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:01.031594  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:01.467205  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:01.526935  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:01.967014  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:02.026135  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:02.467047  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:02.526082  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:02.967059  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:03.028494  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:03.466325  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:03.529774  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:03.966295  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:04.027475  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:04.466923  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:04.527738  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:04.967102  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:05.029116  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:05.466407  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:05.525786  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:05.966771  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:06.027040  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:06.467339  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:06.527982  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:06.966323  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:07.028596  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:07.467193  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:07.529973  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:07.966402  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:08.027779  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:08.467211  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:08.530974  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:08.966402  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:09.026371  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:09.466418  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:09.528371  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:09.967135  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:10.025868  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:10.467132  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:10.528391  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:10.967223  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:11.031129  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:11.466416  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:11.530214  105113 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 21:30:11.966529  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:30:12.026692  105113 command_runner.go:130] > NAME      SECRETS   AGE
	I1025 21:30:12.026714  105113 command_runner.go:130] > default   0         1s
	I1025 21:30:12.029103  105113 kubeadm.go:1081] duration metric: took 12.280985s to wait for elevateKubeSystemPrivileges.
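The polling above — repeated `kubectl get sa default` probes until the `default` ServiceAccount exists — is a plain bounded retry loop. A minimal, self-contained sketch of that pattern (the `retry` helper and its argument names are illustrative, not minikube code):

```shell
# Retry a command until it succeeds or the attempt budget is spent,
# mirroring the elevateKubeSystemPrivileges wait loop logged above.
retry() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0   # probe succeeded
    sleep 0.5          # back off before the next probe
  done
  return 1             # budget exhausted
}

# In the log, the probed command is:
#   kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
```

In the log the probe runs roughly twice a second for ~12s before the ServiceAccount appears, matching the "took 12.280985s" duration metric.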
	I1025 21:30:12.029130  105113 kubeadm.go:406] StartCluster complete in 21.37636351s
	I1025 21:30:12.029153  105113 settings.go:142] acquiring lock: {Name:mkdc9277e8465489704340df47f71845c1a0d579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:30:12.029246  105113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:30:12.029993  105113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-11542/kubeconfig: {Name:mk64fd87b209032b3c81ef85df6a4de19f21a5bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:30:12.030236  105113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 21:30:12.030391  105113 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1025 21:30:12.030476  105113 addons.go:69] Setting storage-provisioner=true in profile "multinode-874778"
	I1025 21:30:12.030492  105113 config.go:182] Loaded profile config "multinode-874778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:30:12.030505  105113 addons.go:231] Setting addon storage-provisioner=true in "multinode-874778"
	I1025 21:30:12.030495  105113 addons.go:69] Setting default-storageclass=true in profile "multinode-874778"
	I1025 21:30:12.030535  105113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-874778"
	I1025 21:30:12.030564  105113 host.go:66] Checking if "multinode-874778" exists ...
	I1025 21:30:12.030600  105113 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:30:12.030924  105113 cli_runner.go:164] Run: docker container inspect multinode-874778 --format={{.State.Status}}
	I1025 21:30:12.030997  105113 cli_runner.go:164] Run: docker container inspect multinode-874778 --format={{.State.Status}}
	I1025 21:30:12.030934  105113 kapi.go:59] client config for multinode-874778: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/client.crt", KeyFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/client.key", CAFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 21:30:12.031700  105113 cert_rotation.go:137] Starting client certificate rotation controller
	I1025 21:30:12.031950  105113 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1025 21:30:12.031964  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:12.031972  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:12.031978  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:12.042446  105113 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1025 21:30:12.042470  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:12.042480  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:12 GMT
	I1025 21:30:12.042488  105113 round_trippers.go:580]     Audit-Id: eb8bd9c1-ba13-47bb-9e91-dc1e990aee88
	I1025 21:30:12.042497  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:12.042505  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:12.042512  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:12.042520  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:12.042530  105113 round_trippers.go:580]     Content-Length: 291
	I1025 21:30:12.042558  105113 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bae3cddd-1c77-4771-90f1-9a4c1aff3e13","resourceVersion":"367","creationTimestamp":"2023-10-25T21:29:58Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1025 21:30:12.043047  105113 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bae3cddd-1c77-4771-90f1-9a4c1aff3e13","resourceVersion":"367","creationTimestamp":"2023-10-25T21:29:58Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1025 21:30:12.043130  105113 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1025 21:30:12.043138  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:12.043149  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:12.043158  105113 round_trippers.go:473]     Content-Type: application/json
	I1025 21:30:12.043165  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:12.049070  105113 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 21:30:12.049089  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:12.049098  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:12 GMT
	I1025 21:30:12.049106  105113 round_trippers.go:580]     Audit-Id: 1875589d-2c39-4c6e-ab5b-6549dad95a9d
	I1025 21:30:12.049115  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:12.049125  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:12.049133  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:12.049142  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:12.049148  105113 round_trippers.go:580]     Content-Length: 291
	I1025 21:30:12.049166  105113 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bae3cddd-1c77-4771-90f1-9a4c1aff3e13","resourceVersion":"369","creationTimestamp":"2023-10-25T21:29:58Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1025 21:30:12.049283  105113 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1025 21:30:12.049292  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:12.049299  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:12.049305  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:12.051857  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:12.051879  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:12.051889  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:12.051896  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:12.051904  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:12.051913  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:12.051924  105113 round_trippers.go:580]     Content-Length: 291
	I1025 21:30:12.051935  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:12 GMT
	I1025 21:30:12.051943  105113 round_trippers.go:580]     Audit-Id: 67790cef-196f-4410-ac2a-60d5dc840599
	I1025 21:30:12.051964  105113 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bae3cddd-1c77-4771-90f1-9a4c1aff3e13","resourceVersion":"369","creationTimestamp":"2023-10-25T21:29:58Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1025 21:30:12.052059  105113 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-874778" context rescaled to 1 replicas
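The rescale above works through the Deployment's Scale subresource: GET the `Scale` object, lower `spec.replicas` from 2 to 1 in the body, and PUT it back at the same resourceVersion. The body edit amounts to a one-field substitution; a sketch on an abbreviated copy of the logged Scale object (interactively, `kubectl -n kube-system scale deployment coredns --replicas=1` performs the equivalent operation):

```shell
# Abbreviated Scale object from the GET response above (metadata/status elided).
scale='{"kind":"Scale","apiVersion":"autoscaling/v1","spec":{"replicas":2}}'

# Rewrite spec.replicas 2 -> 1, producing the PUT request body.
put_body=$(printf '%s' "$scale" | sed 's/"replicas":2/"replicas":1/')
printf '%s\n' "$put_body"
```

Using the Scale subresource rather than patching the Deployment keeps the update scoped to replica count, which is why the 291-byte request and response bodies in the log are identical apart from `"replicas"` and `resourceVersion`.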
	I1025 21:30:12.052091  105113 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 21:30:12.053897  105113 out.go:177] * Verifying Kubernetes components...
	I1025 21:30:12.055486  105113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:30:12.057305  105113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:30:12.056207  105113 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:30:12.058711  105113 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 21:30:12.058725  105113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 21:30:12.058776  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778
	I1025 21:30:12.058891  105113 kapi.go:59] client config for multinode-874778: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/client.crt", KeyFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/client.key", CAFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 21:30:12.059171  105113 addons.go:231] Setting addon default-storageclass=true in "multinode-874778"
	I1025 21:30:12.059199  105113 host.go:66] Checking if "multinode-874778" exists ...
	I1025 21:30:12.059662  105113 cli_runner.go:164] Run: docker container inspect multinode-874778 --format={{.State.Status}}
	I1025 21:30:12.081068  105113 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 21:30:12.081093  105113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 21:30:12.081147  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778
	I1025 21:30:12.082274  105113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778/id_rsa Username:docker}
	I1025 21:30:12.099659  105113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778/id_rsa Username:docker}
	I1025 21:30:12.131068  105113 command_runner.go:130] > apiVersion: v1
	I1025 21:30:12.131089  105113 command_runner.go:130] > data:
	I1025 21:30:12.131096  105113 command_runner.go:130] >   Corefile: |
	I1025 21:30:12.131102  105113 command_runner.go:130] >     .:53 {
	I1025 21:30:12.131109  105113 command_runner.go:130] >         errors
	I1025 21:30:12.131118  105113 command_runner.go:130] >         health {
	I1025 21:30:12.131125  105113 command_runner.go:130] >            lameduck 5s
	I1025 21:30:12.131131  105113 command_runner.go:130] >         }
	I1025 21:30:12.131137  105113 command_runner.go:130] >         ready
	I1025 21:30:12.131154  105113 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1025 21:30:12.131164  105113 command_runner.go:130] >            pods insecure
	I1025 21:30:12.131174  105113 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1025 21:30:12.131188  105113 command_runner.go:130] >            ttl 30
	I1025 21:30:12.131194  105113 command_runner.go:130] >         }
	I1025 21:30:12.131203  105113 command_runner.go:130] >         prometheus :9153
	I1025 21:30:12.131212  105113 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1025 21:30:12.131220  105113 command_runner.go:130] >            max_concurrent 1000
	I1025 21:30:12.131232  105113 command_runner.go:130] >         }
	I1025 21:30:12.131245  105113 command_runner.go:130] >         cache 30
	I1025 21:30:12.131256  105113 command_runner.go:130] >         loop
	I1025 21:30:12.131264  105113 command_runner.go:130] >         reload
	I1025 21:30:12.131274  105113 command_runner.go:130] >         loadbalance
	I1025 21:30:12.131280  105113 command_runner.go:130] >     }
	I1025 21:30:12.131287  105113 command_runner.go:130] > kind: ConfigMap
	I1025 21:30:12.131297  105113 command_runner.go:130] > metadata:
	I1025 21:30:12.131308  105113 command_runner.go:130] >   creationTimestamp: "2023-10-25T21:29:58Z"
	I1025 21:30:12.131318  105113 command_runner.go:130] >   name: coredns
	I1025 21:30:12.131325  105113 command_runner.go:130] >   namespace: kube-system
	I1025 21:30:12.131336  105113 command_runner.go:130] >   resourceVersion: "265"
	I1025 21:30:12.131345  105113 command_runner.go:130] >   uid: d3cba8af-80b1-4a34-b5a1-2bbc42153dad
	I1025 21:30:12.134307  105113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
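The pipeline above injects two things into the Corefile before re-applying it: a `hosts` block (mapping `host.minikube.internal` to the gateway IP) ahead of the `forward` directive, and a `log` directive ahead of `errors`. The two sed expressions can be exercised on a minimal stand-in Corefile — the sample input here is illustrative; the real input is the ConfigMap dumped above, and GNU sed semantics for `i \` text (with embedded `\n`) are assumed:

```shell
# Minimal stand-in Corefile containing only the two anchor directives
# the pipeline edits.
sample='        errors
        forward . /etc/resolv.conf {
        }'

# Same two GNU sed edits as the logged pipeline: a hosts{} block is
# inserted before the forward directive, and "log" before "errors".
out=$(printf '%s\n' "$sample" | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log')
printf '%s\n' "$out"
```

The `configmap/coredns replaced` line further down confirms the real pipeline's `kubectl replace -f -` succeeded.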
	I1025 21:30:12.134703  105113 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:30:12.135058  105113 kapi.go:59] client config for multinode-874778: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/client.crt", KeyFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/client.key", CAFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 21:30:12.135330  105113 node_ready.go:35] waiting up to 6m0s for node "multinode-874778" to be "Ready" ...
	I1025 21:30:12.135500  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:12.135583  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:12.135608  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:12.135623  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:12.138088  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:12.138107  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:12.138120  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:12.138131  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:12.138138  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:12.138150  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:12.138159  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:12 GMT
	I1025 21:30:12.138167  105113 round_trippers.go:580]     Audit-Id: c489d4d7-d922-4e2f-b166-b4b61fd2e722
	I1025 21:30:12.138295  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:12.139066  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:12.139078  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:12.139089  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:12.139098  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:12.149900  105113 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1025 21:30:12.149987  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:12.150018  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:12.150049  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:12.150076  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:12.150094  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:12.150111  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:12 GMT
	I1025 21:30:12.150128  105113 round_trippers.go:580]     Audit-Id: 5f7333e7-7eff-44e8-a621-2ac0a8b2148d
	I1025 21:30:12.150271  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:12.249404  105113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 21:30:12.250053  105113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 21:30:12.651534  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:12.651560  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:12.651572  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:12.651584  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:12.655637  105113 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 21:30:12.655666  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:12.655676  105113 round_trippers.go:580]     Audit-Id: baca696b-d246-4693-8a97-ef26c0f66d27
	I1025 21:30:12.655684  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:12.655691  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:12.655698  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:12.655707  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:12.655716  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:12 GMT
	I1025 21:30:12.655935  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:12.947304  105113 command_runner.go:130] > configmap/coredns replaced
	I1025 21:30:12.951465  105113 start.go:926] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1025 21:30:12.955354  105113 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1025 21:30:12.959832  105113 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1025 21:30:12.959849  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:12.959859  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:12.959867  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:13.027905  105113 round_trippers.go:574] Response Status: 200 OK in 68 milliseconds
	I1025 21:30:13.027929  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:13.027940  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:13.027948  105113 round_trippers.go:580]     Content-Length: 1273
	I1025 21:30:13.027956  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:13 GMT
	I1025 21:30:13.027965  105113 round_trippers.go:580]     Audit-Id: 022b31d8-9356-49cc-a41d-d9f872bd4df7
	I1025 21:30:13.027973  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:13.027985  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:13.028002  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:13.028105  105113 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"399"},"items":[{"metadata":{"name":"standard","uid":"b72851c6-ceb5-437d-843f-f9bf9a4889ac","resourceVersion":"399","creationTimestamp":"2023-10-25T21:30:12Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-25T21:30:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1025 21:30:13.028598  105113 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b72851c6-ceb5-437d-843f-f9bf9a4889ac","resourceVersion":"399","creationTimestamp":"2023-10-25T21:30:12Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-25T21:30:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
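The `storageclass.yaml` applied above creates the object echoed back in these bodies. Reconstructed verbatim from the `kubectl.kubernetes.io/last-applied-configuration` annotation in the response, the manifest is:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # marks this class as the cluster default
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: k8s.io/minikube-hostpath
```

The `is-default-class` annotation is what lets PVCs that omit `storageClassName` bind against `standard`.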
	I1025 21:30:13.028677  105113 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1025 21:30:13.028692  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:13.028704  105113 round_trippers.go:473]     Content-Type: application/json
	I1025 21:30:13.028714  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:13.028729  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:13.032001  105113 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 21:30:13.032030  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:13.032042  105113 round_trippers.go:580]     Audit-Id: b957c93e-9dc7-43ce-aba2-d9a833071a0f
	I1025 21:30:13.032050  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:13.032060  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:13.032076  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:13.032092  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:13.032107  105113 round_trippers.go:580]     Content-Length: 1220
	I1025 21:30:13.032121  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:13 GMT
	I1025 21:30:13.032178  105113 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b72851c6-ceb5-437d-843f-f9bf9a4889ac","resourceVersion":"399","creationTimestamp":"2023-10-25T21:30:12Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-25T21:30:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1025 21:30:13.151390  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:13.151414  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:13.151427  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:13.151437  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:13.153701  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:13.153718  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:13.153725  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:13.153730  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:13.153735  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:13.153740  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:13.153746  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:13 GMT
	I1025 21:30:13.153751  105113 round_trippers.go:580]     Audit-Id: 2876ea04-26e6-44c7-9d09-f8d2fcea3206
	I1025 21:30:13.153919  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:13.179613  105113 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1025 21:30:13.184064  105113 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1025 21:30:13.191080  105113 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1025 21:30:13.196903  105113 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1025 21:30:13.202428  105113 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1025 21:30:13.210230  105113 command_runner.go:130] > pod/storage-provisioner created
	I1025 21:30:13.217420  105113 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1025 21:30:13.218614  105113 addons.go:502] enable addons completed in 1.188227649s: enabled=[default-storageclass storage-provisioner]
	I1025 21:30:13.651250  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:13.651271  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:13.651279  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:13.651286  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:13.653609  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:13.653630  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:13.653639  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:13.653647  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:13.653654  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:13 GMT
	I1025 21:30:13.653662  105113 round_trippers.go:580]     Audit-Id: 7d88c6f3-27f3-403e-9b0a-66976b278d4e
	I1025 21:30:13.653670  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:13.653679  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:13.653787  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:14.150956  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:14.150976  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:14.150984  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:14.150990  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:14.153088  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:14.153106  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:14.153113  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:14.153118  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:14.153124  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:14 GMT
	I1025 21:30:14.153129  105113 round_trippers.go:580]     Audit-Id: 90990e9d-5a2a-4d40-995b-c6a32c2a0f9f
	I1025 21:30:14.153133  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:14.153139  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:14.153265  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:14.153587  105113 node_ready.go:58] node "multinode-874778" has status "Ready":"False"
	I1025 21:30:14.651536  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:14.651556  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:14.651564  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:14.651571  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:14.653703  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:14.653725  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:14.653733  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:14 GMT
	I1025 21:30:14.653738  105113 round_trippers.go:580]     Audit-Id: 1fe213dd-bf54-4b7b-b0be-41c340b2484d
	I1025 21:30:14.653744  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:14.653751  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:14.653759  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:14.653770  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:14.653917  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:15.151507  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:15.151526  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:15.151534  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:15.151540  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:15.153724  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:15.153741  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:15.153748  105113 round_trippers.go:580]     Audit-Id: 3239b6b6-279b-4bd2-bdf7-455595535cfc
	I1025 21:30:15.153753  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:15.153758  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:15.153763  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:15.153768  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:15.153773  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:15 GMT
	I1025 21:30:15.153927  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:15.651548  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:15.651574  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:15.651585  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:15.651594  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:15.653986  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:15.655949  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:15.655960  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:15.655967  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:15.655972  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:15.655978  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:15.655984  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:15 GMT
	I1025 21:30:15.655991  105113 round_trippers.go:580]     Audit-Id: 23cae158-97a7-4deb-a7f0-899178191f0a
	I1025 21:30:15.656113  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:16.151740  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:16.151760  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:16.151768  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:16.151775  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:16.153979  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:16.153997  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:16.154004  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:16.154010  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:16.154018  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:16 GMT
	I1025 21:30:16.154028  105113 round_trippers.go:580]     Audit-Id: 6181db0f-b0cd-4836-9c9a-72cb7302eca9
	I1025 21:30:16.154036  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:16.154045  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:16.154169  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:16.154491  105113 node_ready.go:58] node "multinode-874778" has status "Ready":"False"
	I1025 21:30:16.651521  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:16.651542  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:16.651550  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:16.651556  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:16.653738  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:16.653762  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:16.653770  105113 round_trippers.go:580]     Audit-Id: a28ef905-0c28-4371-b216-825df881ac1e
	I1025 21:30:16.653778  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:16.653786  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:16.653793  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:16.653802  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:16.653818  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:16 GMT
	I1025 21:30:16.653924  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:17.151518  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:17.151537  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:17.151545  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:17.151551  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:17.153773  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:17.153796  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:17.153806  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:17 GMT
	I1025 21:30:17.153814  105113 round_trippers.go:580]     Audit-Id: 2091c3b6-9a11-48c1-8eed-fb9dfca2b5f6
	I1025 21:30:17.153822  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:17.153830  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:17.153838  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:17.153848  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:17.153984  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:17.651512  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:17.651533  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:17.651541  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:17.651546  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:17.653751  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:17.653776  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:17.653788  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:17 GMT
	I1025 21:30:17.653796  105113 round_trippers.go:580]     Audit-Id: 43b979e1-bd81-4a8b-bf01-ab2f25ece139
	I1025 21:30:17.653805  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:17.653813  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:17.653825  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:17.653833  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:17.653995  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:18.151513  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:18.151536  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:18.151549  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:18.151558  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:18.153645  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:18.153667  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:18.153676  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:18.153682  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:18 GMT
	I1025 21:30:18.153687  105113 round_trippers.go:580]     Audit-Id: 70e767ba-295c-4c03-a59c-81d23859762d
	I1025 21:30:18.153692  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:18.153697  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:18.153702  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:18.153828  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:18.651507  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:18.651528  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:18.651535  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:18.651541  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:18.653691  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:18.653717  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:18.653724  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:18 GMT
	I1025 21:30:18.653729  105113 round_trippers.go:580]     Audit-Id: 429966bd-260d-4885-b5eb-44801263fdbe
	I1025 21:30:18.653734  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:18.653740  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:18.653745  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:18.653750  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:18.653944  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:18.654396  105113 node_ready.go:58] node "multinode-874778" has status "Ready":"False"
	I1025 21:30:19.151549  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:19.151568  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:19.151575  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:19.151582  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:19.153739  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:19.153758  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:19.153769  105113 round_trippers.go:580]     Audit-Id: a1a9152b-0b59-49b2-be6f-0faf5c5f4aa9
	I1025 21:30:19.153777  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:19.153786  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:19.153799  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:19.153811  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:19.153823  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:19 GMT
	I1025 21:30:19.153965  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:19.651549  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:19.651571  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:19.651579  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:19.651586  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:19.653708  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:19.653728  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:19.653735  105113 round_trippers.go:580]     Audit-Id: f85ac093-af59-46af-9c87-f835e2a1cbb6
	I1025 21:30:19.653742  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:19.653752  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:19.653760  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:19.653768  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:19.653784  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:19 GMT
	I1025 21:30:19.653956  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:20.151571  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:20.151594  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:20.151602  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:20.151608  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:20.153993  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:20.154026  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:20.154036  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:20 GMT
	I1025 21:30:20.154046  105113 round_trippers.go:580]     Audit-Id: cc2fbd00-2586-4a8b-9056-22bf5bb0aad5
	I1025 21:30:20.154055  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:20.154063  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:20.154072  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:20.154080  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:20.154213  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:20.650879  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:20.650902  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:20.650914  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:20.650927  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:20.653134  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:20.653155  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:20.655641  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:20.655652  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:20 GMT
	I1025 21:30:20.655658  105113 round_trippers.go:580]     Audit-Id: 2d86c3b8-ce2c-48d1-9cdf-928308eade64
	I1025 21:30:20.655666  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:20.655672  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:20.655680  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:20.655804  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:20.656139  105113 node_ready.go:58] node "multinode-874778" has status "Ready":"False"
	I1025 21:30:21.150954  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:21.150975  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:21.150984  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:21.150991  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:21.153280  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:21.153320  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:21.153331  105113 round_trippers.go:580]     Audit-Id: fc0e705f-2736-4f9c-be33-63c69e050d15
	I1025 21:30:21.153340  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:21.153354  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:21.153378  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:21.153388  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:21.153394  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:21 GMT
	I1025 21:30:21.153546  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:21.650971  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:21.650994  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:21.651005  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:21.651014  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:21.653110  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:21.653135  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:21.653145  105113 round_trippers.go:580]     Audit-Id: 04675f57-0fd9-446b-9ac1-ecee7d8353d0
	I1025 21:30:21.653153  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:21.653159  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:21.653163  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:21.653169  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:21.653174  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:21 GMT
	I1025 21:30:21.653313  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:22.150904  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:22.150924  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:22.150932  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:22.150938  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:22.153099  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:22.153122  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:22.153133  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:22.153142  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:22.153152  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:22 GMT
	I1025 21:30:22.153162  105113 round_trippers.go:580]     Audit-Id: 20bcef7e-975a-44c6-b070-f258b5235d81
	I1025 21:30:22.153171  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:22.153183  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:22.153302  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:22.651557  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:22.651578  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:22.651586  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:22.651592  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:22.654101  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:22.654125  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:22.654139  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:22.654148  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:22.654157  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:22.654164  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:22.654169  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:22 GMT
	I1025 21:30:22.654175  105113 round_trippers.go:580]     Audit-Id: 49f7d126-b0d7-4dcc-a0f6-ed166d902134
	I1025 21:30:22.654332  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:23.151618  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:23.151637  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:23.151645  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:23.151651  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:23.153876  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:23.153897  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:23.153907  105113 round_trippers.go:580]     Audit-Id: 6d2bf8f4-d085-45e1-9153-1efcbbb5d7fa
	I1025 21:30:23.153914  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:23.153922  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:23.153929  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:23.153936  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:23.153943  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:23 GMT
	I1025 21:30:23.154089  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:23.154500  105113 node_ready.go:58] node "multinode-874778" has status "Ready":"False"
	I1025 21:30:23.651502  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:23.651522  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:23.651530  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:23.651535  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:23.653758  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:23.653775  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:23.653781  105113 round_trippers.go:580]     Audit-Id: ae55a5e7-67c1-4a00-8dd0-09dd58a35be3
	I1025 21:30:23.653787  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:23.653792  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:23.653796  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:23.653802  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:23.653810  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:23 GMT
	I1025 21:30:23.654022  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:24.151531  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:24.151555  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:24.151568  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:24.151578  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:24.154005  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:24.154022  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:24.154028  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:24.154034  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:24 GMT
	I1025 21:30:24.154039  105113 round_trippers.go:580]     Audit-Id: 2a67f036-5831-478e-93a7-4108e836e617
	I1025 21:30:24.154044  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:24.154049  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:24.154054  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:24.154158  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:24.651546  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:24.651571  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:24.651584  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:24.651595  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:24.653838  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:24.653863  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:24.653875  105113 round_trippers.go:580]     Audit-Id: 1fc285b6-a455-4eae-8cec-d4af8de74573
	I1025 21:30:24.653890  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:24.653899  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:24.653908  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:24.653918  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:24.653927  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:24 GMT
	I1025 21:30:24.654043  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:25.151557  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:25.151581  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:25.151590  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:25.151596  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:25.153757  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:25.153782  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:25.153793  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:25.153805  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:25.153814  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:25.153824  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:25 GMT
	I1025 21:30:25.153837  105113 round_trippers.go:580]     Audit-Id: a0d5038e-c669-4a63-99cd-3adfc0e7d149
	I1025 21:30:25.153850  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:25.153967  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:25.651568  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:25.651592  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:25.651601  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:25.651611  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:25.653935  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:25.656317  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:25.656328  105113 round_trippers.go:580]     Audit-Id: a7ce61cd-5dfd-4f98-beba-00b14a031927
	I1025 21:30:25.656334  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:25.656340  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:25.656345  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:25.656353  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:25.656359  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:25 GMT
	I1025 21:30:25.656477  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:25.656783  105113 node_ready.go:58] node "multinode-874778" has status "Ready":"False"
	I1025 21:30:26.151770  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:26.151789  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:26.151797  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:26.151803  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:26.153821  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:26.153844  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:26.153854  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:26 GMT
	I1025 21:30:26.153863  105113 round_trippers.go:580]     Audit-Id: 7c509555-e90d-4bbd-ac56-73abd09d1c47
	I1025 21:30:26.153871  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:26.153879  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:26.153886  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:26.153898  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:26.154036  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:26.651507  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:26.651529  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:26.651537  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:26.651543  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:26.653704  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:26.653722  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:26.653729  105113 round_trippers.go:580]     Audit-Id: 4f01f0bc-0c04-4fbb-8ff4-d44734dbb965
	I1025 21:30:26.653734  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:26.653740  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:26.653747  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:26.653757  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:26.653768  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:26 GMT
	I1025 21:30:26.653896  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:27.151555  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:27.151577  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:27.151585  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:27.151591  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:27.153731  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:27.153754  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:27.153761  105113 round_trippers.go:580]     Audit-Id: d1f457e5-3fa1-43aa-870e-94aac30e1ddc
	I1025 21:30:27.153770  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:27.153778  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:27.153785  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:27.153792  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:27.153800  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:27 GMT
	I1025 21:30:27.153917  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:27.651516  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:27.651536  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:27.651544  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:27.651556  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:27.653623  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:27.653645  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:27.653656  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:27.653666  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:27 GMT
	I1025 21:30:27.653674  105113 round_trippers.go:580]     Audit-Id: b757e79d-c3e8-4124-a0f3-bef1a321bd8c
	I1025 21:30:27.653688  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:27.653700  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:27.653708  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:27.653876  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:28.151411  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:28.151433  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:28.151443  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:28.151451  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:28.153540  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:28.153562  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:28.153569  105113 round_trippers.go:580]     Audit-Id: 51270be3-ea3e-4a26-8d85-831856e3c395
	I1025 21:30:28.153575  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:28.153580  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:28.153585  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:28.153590  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:28.153595  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:28 GMT
	I1025 21:30:28.153707  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:28.153996  105113 node_ready.go:58] node "multinode-874778" has status "Ready":"False"
	I1025 21:30:28.651261  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:28.651283  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:28.651293  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:28.651301  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:28.653336  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:28.653353  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:28.653361  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:28.653370  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:28.653378  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:28 GMT
	I1025 21:30:28.653385  105113 round_trippers.go:580]     Audit-Id: 8ad30cab-24e3-412b-8853-87b494013f83
	I1025 21:30:28.653393  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:28.653400  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:28.653600  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:29.151113  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:29.151135  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:29.151148  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:29.151157  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:29.153227  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:29.153247  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:29.153253  105113 round_trippers.go:580]     Audit-Id: 24dc4dbe-c453-479c-a39b-02a53f5c651d
	I1025 21:30:29.153258  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:29.153266  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:29.153275  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:29.153283  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:29.153290  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:29 GMT
	I1025 21:30:29.153412  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:29.650975  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:29.650997  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:29.651005  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:29.651010  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:29.653209  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:29.653229  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:29.653239  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:29.653248  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:29.653255  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:29 GMT
	I1025 21:30:29.653263  105113 round_trippers.go:580]     Audit-Id: d290c215-b741-4309-99d5-11d47599310c
	I1025 21:30:29.653272  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:29.653281  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:29.653419  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:30.151519  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:30.151541  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:30.151549  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:30.151554  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:30.153800  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:30.153822  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:30.153830  105113 round_trippers.go:580]     Audit-Id: ffb8de2b-8ad0-40b7-9f53-8eadace87e4b
	I1025 21:30:30.153838  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:30.153845  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:30.153854  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:30.153876  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:30.153887  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:30 GMT
	I1025 21:30:30.154030  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:30.154398  105113 node_ready.go:58] node "multinode-874778" has status "Ready":"False"
	I1025 21:30:30.651685  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:30.651706  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:30.651714  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:30.651721  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:30.653926  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:30.656425  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:30.656439  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:30.656450  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:30.656462  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:30 GMT
	I1025 21:30:30.656477  105113 round_trippers.go:580]     Audit-Id: 759ac3e9-3b4f-4902-8082-36fbb515528b
	I1025 21:30:30.656491  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:30.656504  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:30.656641  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:31.151643  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:31.151663  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:31.151672  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:31.151677  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:31.153789  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:31.153813  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:31.153822  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:31.153830  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:31.153838  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:31.153846  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:31 GMT
	I1025 21:30:31.153858  105113 round_trippers.go:580]     Audit-Id: 2f99232c-9155-4d95-9b47-1f96cc04f2eb
	I1025 21:30:31.153871  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:31.154004  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:31.651653  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:31.651673  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:31.651681  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:31.651687  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:31.653862  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:31.653885  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:31.653892  105113 round_trippers.go:580]     Audit-Id: 19bd2a45-5373-4a9d-9dbf-061f56fa4fd8
	I1025 21:30:31.653898  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:31.653903  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:31.653908  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:31.653913  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:31.653920  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:31 GMT
	I1025 21:30:31.654080  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:32.150928  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:32.150947  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:32.150955  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:32.150961  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:32.153241  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:32.153260  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:32.153268  105113 round_trippers.go:580]     Audit-Id: 10293a24-3f5e-4d7d-8e30-fbd8f4f23fa5
	I1025 21:30:32.153275  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:32.153284  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:32.153292  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:32.153305  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:32.153322  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:32 GMT
	I1025 21:30:32.153469  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:32.650991  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:32.651011  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:32.651019  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:32.651025  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:32.653124  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:32.653140  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:32.653147  105113 round_trippers.go:580]     Audit-Id: 31cea42c-1eda-4cdd-91bb-c27c4d1a9584
	I1025 21:30:32.653152  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:32.653159  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:32.653167  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:32.653175  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:32.653193  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:32 GMT
	I1025 21:30:32.653334  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:32.653665  105113 node_ready.go:58] node "multinode-874778" has status "Ready":"False"
	I1025 21:30:33.151517  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:33.151537  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:33.151549  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:33.151563  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:33.153704  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:33.153725  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:33.153735  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:33 GMT
	I1025 21:30:33.153743  105113 round_trippers.go:580]     Audit-Id: ad8edf57-dcc2-4a8d-85c4-e6ec88423a02
	I1025 21:30:33.153752  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:33.153761  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:33.153769  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:33.153776  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:33.153893  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:33.651620  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:33.651646  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:33.651654  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:33.651661  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:33.653787  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:33.653804  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:33.653811  105113 round_trippers.go:580]     Audit-Id: 29869d5d-6c6d-4105-8eb2-657c6d3211c9
	I1025 21:30:33.653816  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:33.653821  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:33.653826  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:33.653834  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:33.653844  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:33 GMT
	I1025 21:30:33.654022  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:34.151547  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:34.151569  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:34.151577  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:34.151598  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:34.153825  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:34.153843  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:34.153850  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:34.153855  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:34.153860  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:34 GMT
	I1025 21:30:34.153865  105113 round_trippers.go:580]     Audit-Id: 60959890-b1fe-4567-af8e-35cacd795b57
	I1025 21:30:34.153870  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:34.153875  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:34.154041  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:34.651706  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:34.651729  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:34.651738  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:34.651744  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:34.653996  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:34.654018  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:34.654028  105113 round_trippers.go:580]     Audit-Id: b41fc528-9432-4c6d-83f9-70ab5e6c7620
	I1025 21:30:34.654042  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:34.654050  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:34.654058  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:34.654067  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:34.654072  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:34 GMT
	I1025 21:30:34.654228  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:34.654548  105113 node_ready.go:58] node "multinode-874778" has status "Ready":"False"
	I1025 21:30:35.151792  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:35.151811  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:35.151819  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:35.151825  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:35.154031  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:35.154047  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:35.154056  105113 round_trippers.go:580]     Audit-Id: fbc3de33-890a-4ae4-a2f2-a6a5ea5e0502
	I1025 21:30:35.154062  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:35.154067  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:35.154072  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:35.154077  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:35.154093  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:35 GMT
	I1025 21:30:35.154218  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:35.651522  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:35.651542  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:35.651549  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:35.651556  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:35.653608  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:35.653626  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:35.653633  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:35.653638  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:35 GMT
	I1025 21:30:35.653647  105113 round_trippers.go:580]     Audit-Id: ce82e2e2-9db5-4ee9-9db3-8c751862f27f
	I1025 21:30:35.653655  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:35.653663  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:35.653673  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:35.656193  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:36.151390  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:36.151411  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:36.151419  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:36.151425  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:36.153603  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:36.153626  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:36.153635  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:36.153649  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:36.153661  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:36 GMT
	I1025 21:30:36.153668  105113 round_trippers.go:580]     Audit-Id: daf25a23-ea72-4653-80db-2a3537a292c1
	I1025 21:30:36.153680  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:36.153689  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:36.153813  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:36.651367  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:36.651386  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:36.651395  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:36.651400  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:36.653565  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:36.653587  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:36.653596  105113 round_trippers.go:580]     Audit-Id: c2f4f48f-29c3-4dcf-8a5a-d30982ccd5ee
	I1025 21:30:36.653604  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:36.653612  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:36.653626  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:36.653634  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:36.653653  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:36 GMT
	I1025 21:30:36.653773  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:37.151278  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:37.151299  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:37.151307  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:37.151313  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:37.153321  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:30:37.153340  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:37.153347  105113 round_trippers.go:580]     Audit-Id: a97dc26c-e37a-4eeb-a66e-3f672739c51c
	I1025 21:30:37.153353  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:37.153358  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:37.153363  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:37.153368  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:37.153373  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:37 GMT
	I1025 21:30:37.153547  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:37.153999  105113 node_ready.go:58] node "multinode-874778" has status "Ready":"False"
	I1025 21:30:37.651071  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:37.651091  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:37.651099  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:37.651106  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:37.653185  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:37.653205  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:37.653212  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:37.653218  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:37.653223  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:37 GMT
	I1025 21:30:37.653228  105113 round_trippers.go:580]     Audit-Id: e133b425-f998-4216-be80-75a08e6de63f
	I1025 21:30:37.653233  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:37.653238  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:37.653385  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:38.150968  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:38.150988  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:38.151000  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:38.151007  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:38.153336  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:38.153359  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:38.153369  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:38.153378  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:38.153387  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:38.153395  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:38 GMT
	I1025 21:30:38.153408  105113 round_trippers.go:580]     Audit-Id: 9fe47afd-0ba9-4f72-a11a-348d4c3439ba
	I1025 21:30:38.153419  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:38.153556  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:38.651112  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:38.651142  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:38.651152  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:38.651160  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:38.653364  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:38.653384  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:38.653393  105113 round_trippers.go:580]     Audit-Id: f8864aa4-4958-4ac3-a592-54b6f962f1cb
	I1025 21:30:38.653401  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:38.653409  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:38.653417  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:38.653424  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:38.653433  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:38 GMT
	I1025 21:30:38.653577  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:39.150970  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:39.150994  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:39.151006  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:39.151023  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:39.153190  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:39.153208  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:39.153214  105113 round_trippers.go:580]     Audit-Id: 001fe6bf-6d85-48ad-a403-661589c4bf7a
	I1025 21:30:39.153220  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:39.153225  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:39.153233  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:39.153241  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:39.153249  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:39 GMT
	I1025 21:30:39.153447  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:39.650986  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:39.651015  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:39.651026  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:39.651034  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:39.653227  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:39.653248  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:39.653255  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:39.653261  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:39.653266  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:39 GMT
	I1025 21:30:39.653271  105113 round_trippers.go:580]     Audit-Id: 1b6b1cd7-368a-4cab-a3c2-11b95e7100f6
	I1025 21:30:39.653276  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:39.653281  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:39.653411  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:39.653738  105113 node_ready.go:58] node "multinode-874778" has status "Ready":"False"
	I1025 21:30:40.151031  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:40.151055  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:40.151063  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:40.151071  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:40.153285  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:40.153305  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:40.153314  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:40.153322  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:40.153329  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:40 GMT
	I1025 21:30:40.153338  105113 round_trippers.go:580]     Audit-Id: ecf8141b-e675-48ca-9d69-5cf8825704d2
	I1025 21:30:40.153348  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:40.153359  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:40.153526  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:40.651327  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:40.651348  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:40.651356  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:40.651362  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:40.653738  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:40.655676  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:40.655688  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:40.655695  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:40 GMT
	I1025 21:30:40.655703  105113 round_trippers.go:580]     Audit-Id: 4091466b-ddb3-43f4-92ec-ac8ce8c93a60
	I1025 21:30:40.655709  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:40.655722  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:40.655727  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:40.655843  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:41.151515  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:41.151535  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:41.151543  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:41.151549  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:41.153904  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:41.153930  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:41.153947  105113 round_trippers.go:580]     Audit-Id: e797ebc5-db98-4b57-bb7b-3234e4bfc24c
	I1025 21:30:41.153957  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:41.153969  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:41.153978  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:41.153990  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:41.153999  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:41 GMT
	I1025 21:30:41.154131  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:41.651556  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:41.651580  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:41.651590  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:41.651599  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:41.653856  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:41.653881  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:41.653890  105113 round_trippers.go:580]     Audit-Id: 2d859904-5ac6-4dff-8abb-2120978306d1
	I1025 21:30:41.653898  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:41.653905  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:41.653912  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:41.653928  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:41.653938  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:41 GMT
	I1025 21:30:41.654068  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:41.654396  105113 node_ready.go:58] node "multinode-874778" has status "Ready":"False"
	I1025 21:30:42.151921  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:42.151946  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:42.151960  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:42.151970  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:42.154018  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:42.154048  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:42.154054  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:42.154060  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:42 GMT
	I1025 21:30:42.154065  105113 round_trippers.go:580]     Audit-Id: c49f9c80-5548-4058-afdc-0fcf0e05c848
	I1025 21:30:42.154070  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:42.154075  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:42.154083  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:42.154264  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:42.651539  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:42.651558  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:42.651566  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:42.651572  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:42.653781  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:42.653802  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:42.653809  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:42.653814  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:42.653819  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:42.653824  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:42 GMT
	I1025 21:30:42.653829  105113 round_trippers.go:580]     Audit-Id: c3b2c590-7bfc-4e99-9b8d-a8a7aac72ce5
	I1025 21:30:42.653835  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:42.653989  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:43.151658  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:43.151679  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:43.151687  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:43.151693  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:43.154128  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:43.154149  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:43.154159  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:43 GMT
	I1025 21:30:43.154168  105113 round_trippers.go:580]     Audit-Id: ec64670e-a2d6-4d74-afae-79263efb7cd2
	I1025 21:30:43.154177  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:43.154192  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:43.154204  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:43.154213  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:43.154359  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"344","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1025 21:30:43.651540  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:43.651564  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:43.651578  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:43.651587  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:43.653720  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:43.653738  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:43.653745  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:43.653751  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:43 GMT
	I1025 21:30:43.653755  105113 round_trippers.go:580]     Audit-Id: b5011656-a319-450b-ab8c-392a0983acce
	I1025 21:30:43.653761  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:43.653765  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:43.653770  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:43.653903  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1025 21:30:43.654202  105113 node_ready.go:49] node "multinode-874778" has status "Ready":"True"
	I1025 21:30:43.654215  105113 node_ready.go:38] duration metric: took 31.518857946s waiting for node "multinode-874778" to be "Ready" ...
	I1025 21:30:43.654224  105113 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 21:30:43.654293  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1025 21:30:43.654301  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:43.654308  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:43.654313  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:43.657021  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:43.657038  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:43.657049  105113 round_trippers.go:580]     Audit-Id: 0e2b857e-2e3c-4276-92ca-22f6b2f51f51
	I1025 21:30:43.657054  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:43.657060  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:43.657066  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:43.657072  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:43.657081  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:43 GMT
	I1025 21:30:43.657431  105113 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"430"},"items":[{"metadata":{"name":"coredns-5dd5756b68-knfr2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aaf2ddcd-6832-4476-b04f-12e4fdd933b8","resourceVersion":"430","creationTimestamp":"2023-10-25T21:30:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"141ac871-bb5e-4c1b-8ac4-12316f895547","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:30:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"141ac871-bb5e-4c1b-8ac4-12316f895547\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55533 chars]
	I1025 21:30:43.660302  105113 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-knfr2" in "kube-system" namespace to be "Ready" ...
	I1025 21:30:43.660367  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knfr2
	I1025 21:30:43.660374  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:43.660381  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:43.660389  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:43.661959  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:30:43.661978  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:43.661987  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:43.661994  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:43.662005  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:43.662016  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:43 GMT
	I1025 21:30:43.662027  105113 round_trippers.go:580]     Audit-Id: 6073fcaa-2c99-40ac-8418-4c534e695cda
	I1025 21:30:43.662039  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:43.662140  105113 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knfr2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aaf2ddcd-6832-4476-b04f-12e4fdd933b8","resourceVersion":"430","creationTimestamp":"2023-10-25T21:30:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"141ac871-bb5e-4c1b-8ac4-12316f895547","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:30:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"141ac871-bb5e-4c1b-8ac4-12316f895547\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1025 21:30:43.662647  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:43.662663  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:43.662674  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:43.662684  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:43.664155  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:30:43.664173  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:43.664182  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:43.664190  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:43.664202  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:43.664213  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:43 GMT
	I1025 21:30:43.664224  105113 round_trippers.go:580]     Audit-Id: 9bbd1814-7a9c-46ba-acf3-57bfd1148de1
	I1025 21:30:43.664235  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:43.664328  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1025 21:30:43.664604  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knfr2
	I1025 21:30:43.664613  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:43.664620  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:43.664625  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:43.666112  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:30:43.666132  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:43.666141  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:43.666150  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:43 GMT
	I1025 21:30:43.666157  105113 round_trippers.go:580]     Audit-Id: 52c54438-ae18-43e5-a3ca-d9f9fb9aab1f
	I1025 21:30:43.666165  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:43.666174  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:43.666186  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:43.666403  105113 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knfr2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aaf2ddcd-6832-4476-b04f-12e4fdd933b8","resourceVersion":"430","creationTimestamp":"2023-10-25T21:30:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"141ac871-bb5e-4c1b-8ac4-12316f895547","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:30:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"141ac871-bb5e-4c1b-8ac4-12316f895547\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1025 21:30:43.666745  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:43.666756  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:43.666766  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:43.666774  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:43.668250  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:30:43.668263  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:43.668268  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:43 GMT
	I1025 21:30:43.668273  105113 round_trippers.go:580]     Audit-Id: fcb36bcd-1cdc-439b-a216-9fb198483278
	I1025 21:30:43.668279  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:43.668283  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:43.668288  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:43.668293  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:43.668477  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1025 21:30:44.169269  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knfr2
	I1025 21:30:44.169289  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:44.169297  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:44.169303  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:44.171572  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:44.171592  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:44.171599  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:44.171609  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:44.171617  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:44.171625  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:44.171640  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:44 GMT
	I1025 21:30:44.171649  105113 round_trippers.go:580]     Audit-Id: e37ff62d-e531-4faa-89d2-c02fd7945b91
	I1025 21:30:44.171762  105113 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knfr2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aaf2ddcd-6832-4476-b04f-12e4fdd933b8","resourceVersion":"442","creationTimestamp":"2023-10-25T21:30:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"141ac871-bb5e-4c1b-8ac4-12316f895547","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:30:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"141ac871-bb5e-4c1b-8ac4-12316f895547\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1025 21:30:44.172232  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:44.172245  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:44.172252  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:44.172260  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:44.173971  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:30:44.173985  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:44.173992  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:44.173997  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:44.174002  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:44 GMT
	I1025 21:30:44.174007  105113 round_trippers.go:580]     Audit-Id: ba65c0b9-729d-402c-b038-88c6a97607b3
	I1025 21:30:44.174015  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:44.174022  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:44.174135  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1025 21:30:44.174528  105113 pod_ready.go:92] pod "coredns-5dd5756b68-knfr2" in "kube-system" namespace has status "Ready":"True"
	I1025 21:30:44.174548  105113 pod_ready.go:81] duration metric: took 514.225117ms waiting for pod "coredns-5dd5756b68-knfr2" in "kube-system" namespace to be "Ready" ...
	I1025 21:30:44.174562  105113 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-874778" in "kube-system" namespace to be "Ready" ...
	I1025 21:30:44.174623  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-874778
	I1025 21:30:44.174633  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:44.174643  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:44.174657  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:44.176285  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:30:44.176302  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:44.176311  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:44.176318  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:44 GMT
	I1025 21:30:44.176326  105113 round_trippers.go:580]     Audit-Id: 1f146855-acf1-4e89-bbe3-e806021bc169
	I1025 21:30:44.176334  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:44.176343  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:44.176355  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:44.176451  105113 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-874778","namespace":"kube-system","uid":"732babe1-d90c-4663-bbbc-acbca47036e2","resourceVersion":"323","creationTimestamp":"2023-10-25T21:29:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"0013ce498834fae8862b745666dfa45e","kubernetes.io/config.mirror":"0013ce498834fae8862b745666dfa45e","kubernetes.io/config.seen":"2023-10-25T21:29:58.930794088Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1025 21:30:44.176785  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:44.176799  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:44.176808  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:44.176817  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:44.179190  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:44.179204  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:44.179211  105113 round_trippers.go:580]     Audit-Id: 7c0f8bee-fe77-4d41-b032-4f4e05a53b26
	I1025 21:30:44.179216  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:44.179221  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:44.179226  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:44.179231  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:44.179236  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:44 GMT
	I1025 21:30:44.179398  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1025 21:30:44.179670  105113 pod_ready.go:92] pod "etcd-multinode-874778" in "kube-system" namespace has status "Ready":"True"
	I1025 21:30:44.179681  105113 pod_ready.go:81] duration metric: took 5.108402ms waiting for pod "etcd-multinode-874778" in "kube-system" namespace to be "Ready" ...
	I1025 21:30:44.179692  105113 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-874778" in "kube-system" namespace to be "Ready" ...
	I1025 21:30:44.179737  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-874778
	I1025 21:30:44.179745  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:44.179752  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:44.179757  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:44.181470  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:30:44.181484  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:44.181493  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:44.181501  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:44.181510  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:44 GMT
	I1025 21:30:44.181526  105113 round_trippers.go:580]     Audit-Id: 9f6396cf-d00d-433a-86c3-9493d048bb9f
	I1025 21:30:44.181531  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:44.181536  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:44.181693  105113 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-874778","namespace":"kube-system","uid":"ef34869e-ca49-4a2c-96c7-7f7e9bc648d2","resourceVersion":"317","creationTimestamp":"2023-10-25T21:29:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"fb48415dd7ff02ca5565298cd5179555","kubernetes.io/config.mirror":"fb48415dd7ff02ca5565298cd5179555","kubernetes.io/config.seen":"2023-10-25T21:29:58.930800852Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1025 21:30:44.182125  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:44.182140  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:44.182147  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:44.182153  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:44.183694  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:30:44.183708  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:44.183715  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:44.183721  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:44 GMT
	I1025 21:30:44.183726  105113 round_trippers.go:580]     Audit-Id: e31a9435-3e23-4d23-a8ec-0b073417a718
	I1025 21:30:44.183731  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:44.183741  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:44.183752  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:44.183916  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1025 21:30:44.184176  105113 pod_ready.go:92] pod "kube-apiserver-multinode-874778" in "kube-system" namespace has status "Ready":"True"
	I1025 21:30:44.184191  105113 pod_ready.go:81] duration metric: took 4.491323ms waiting for pod "kube-apiserver-multinode-874778" in "kube-system" namespace to be "Ready" ...
	I1025 21:30:44.184200  105113 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-874778" in "kube-system" namespace to be "Ready" ...
	I1025 21:30:44.252498  105113 request.go:629] Waited for 68.241913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-874778
	I1025 21:30:44.252555  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-874778
	I1025 21:30:44.252560  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:44.252573  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:44.252590  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:44.254839  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:44.254858  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:44.254869  105113 round_trippers.go:580]     Audit-Id: e61983a3-66fd-43f6-a46d-575adaca7205
	I1025 21:30:44.254874  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:44.254880  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:44.254887  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:44.254895  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:44.254903  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:44 GMT
	I1025 21:30:44.255105  105113 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-874778","namespace":"kube-system","uid":"29064f70-ec6c-4d84-ab29-55aa9fdf9013","resourceVersion":"315","creationTimestamp":"2023-10-25T21:29:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fbcd628f4f0d0a61cdf2115088b35d26","kubernetes.io/config.mirror":"fbcd628f4f0d0a61cdf2115088b35d26","kubernetes.io/config.seen":"2023-10-25T21:29:53.419053110Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1025 21:30:44.451919  105113 request.go:629] Waited for 196.349875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:44.451969  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:44.451974  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:44.451982  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:44.451987  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:44.454087  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:44.454109  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:44.454120  105113 round_trippers.go:580]     Audit-Id: 9aeff59b-f0b5-4ef8-aec5-076a4ad28518
	I1025 21:30:44.454129  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:44.454138  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:44.454146  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:44.454153  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:44.454166  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:44 GMT
	I1025 21:30:44.454303  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1025 21:30:44.454634  105113 pod_ready.go:92] pod "kube-controller-manager-multinode-874778" in "kube-system" namespace has status "Ready":"True"
	I1025 21:30:44.454651  105113 pod_ready.go:81] duration metric: took 270.440159ms waiting for pod "kube-controller-manager-multinode-874778" in "kube-system" namespace to be "Ready" ...
	I1025 21:30:44.454665  105113 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-msn2q" in "kube-system" namespace to be "Ready" ...
	I1025 21:30:44.652082  105113 request.go:629] Waited for 197.349321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msn2q
	I1025 21:30:44.652154  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msn2q
	I1025 21:30:44.652161  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:44.652170  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:44.652186  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:44.654655  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:44.654677  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:44.654688  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:44.654697  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:44.654706  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:44 GMT
	I1025 21:30:44.654718  105113 round_trippers.go:580]     Audit-Id: c1e2353c-682f-4a43-b4c3-4da800f288a7
	I1025 21:30:44.654727  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:44.654744  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:44.654883  105113 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-msn2q","generateName":"kube-proxy-","namespace":"kube-system","uid":"75b8f03b-41ea-45cd-9128-daed81df1ecc","resourceVersion":"402","creationTimestamp":"2023-10-25T21:30:11Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"704a299d-d94f-4e3e-a6f8-08ba8cf233bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:30:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"704a299d-d94f-4e3e-a6f8-08ba8cf233bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1025 21:30:44.851593  105113 request.go:629] Waited for 196.276734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:44.851660  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:44.851665  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:44.851672  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:44.851679  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:44.853858  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:44.853875  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:44.853881  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:44 GMT
	I1025 21:30:44.853886  105113 round_trippers.go:580]     Audit-Id: c98a7722-c81c-41b4-b774-4306597336b1
	I1025 21:30:44.853892  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:44.853899  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:44.853907  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:44.853918  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:44.854092  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1025 21:30:44.854474  105113 pod_ready.go:92] pod "kube-proxy-msn2q" in "kube-system" namespace has status "Ready":"True"
	I1025 21:30:44.854491  105113 pod_ready.go:81] duration metric: took 399.817478ms waiting for pod "kube-proxy-msn2q" in "kube-system" namespace to be "Ready" ...
	I1025 21:30:44.854504  105113 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-874778" in "kube-system" namespace to be "Ready" ...
	I1025 21:30:45.051824  105113 request.go:629] Waited for 197.241019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-874778
	I1025 21:30:45.051875  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-874778
	I1025 21:30:45.051880  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:45.051887  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:45.051893  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:45.054191  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:45.054212  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:45.054221  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:45.054229  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:45 GMT
	I1025 21:30:45.054237  105113 round_trippers.go:580]     Audit-Id: 047ab1c2-f3ae-4dd2-9291-75753720f2b8
	I1025 21:30:45.054246  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:45.054259  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:45.054271  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:45.054403  105113 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-874778","namespace":"kube-system","uid":"946650c6-c5ab-4c2a-8904-f989727728c7","resourceVersion":"397","creationTimestamp":"2023-10-25T21:29:59Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9fc5bdc35c58829ebbeb6e7aac44e301","kubernetes.io/config.mirror":"9fc5bdc35c58829ebbeb6e7aac44e301","kubernetes.io/config.seen":"2023-10-25T21:29:58.930804447Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1025 21:30:45.252161  105113 request.go:629] Waited for 197.354142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:45.252221  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:30:45.252229  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:45.252240  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:45.252255  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:45.254528  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:45.254552  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:45.254561  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:45.254569  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:45.254579  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:45.254588  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:45.254595  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:45 GMT
	I1025 21:30:45.254609  105113 round_trippers.go:580]     Audit-Id: 9cb2b49c-474d-49f6-b37e-47b1aba5da16
	I1025 21:30:45.254702  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1025 21:30:45.255011  105113 pod_ready.go:92] pod "kube-scheduler-multinode-874778" in "kube-system" namespace has status "Ready":"True"
	I1025 21:30:45.255024  105113 pod_ready.go:81] duration metric: took 400.509917ms waiting for pod "kube-scheduler-multinode-874778" in "kube-system" namespace to be "Ready" ...
	I1025 21:30:45.255034  105113 pod_ready.go:38] duration metric: took 1.600800041s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 21:30:45.255049  105113 api_server.go:52] waiting for apiserver process to appear ...
	I1025 21:30:45.255098  105113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:30:45.265281  105113 command_runner.go:130] > 1431
	I1025 21:30:45.265312  105113 api_server.go:72] duration metric: took 33.213191567s to wait for apiserver process to appear ...
	I1025 21:30:45.265321  105113 api_server.go:88] waiting for apiserver healthz status ...
	I1025 21:30:45.265340  105113 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1025 21:30:45.270058  105113 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1025 21:30:45.270116  105113 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1025 21:30:45.270124  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:45.270132  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:45.270141  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:45.271038  105113 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1025 21:30:45.271053  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:45.271059  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:45.271065  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:45.271070  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:45.271078  105113 round_trippers.go:580]     Content-Length: 264
	I1025 21:30:45.271084  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:45 GMT
	I1025 21:30:45.271091  105113 round_trippers.go:580]     Audit-Id: 82c890b3-4c70-43b0-8474-3de6803bdd8c
	I1025 21:30:45.271097  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:45.271112  105113 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1025 21:30:45.271183  105113 api_server.go:141] control plane version: v1.28.3
	I1025 21:30:45.271197  105113 api_server.go:131] duration metric: took 5.871421ms to wait for apiserver health ...
	I1025 21:30:45.271203  105113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 21:30:45.452456  105113 request.go:629] Waited for 181.190081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1025 21:30:45.452532  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1025 21:30:45.452544  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:45.452557  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:45.452568  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:45.455781  105113 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 21:30:45.455806  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:45.455815  105113 round_trippers.go:580]     Audit-Id: 1681117a-6466-4037-9477-ca912c803e57
	I1025 21:30:45.455823  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:45.455831  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:45.455840  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:45.455847  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:45.455854  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:45 GMT
	I1025 21:30:45.456360  105113 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"coredns-5dd5756b68-knfr2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aaf2ddcd-6832-4476-b04f-12e4fdd933b8","resourceVersion":"442","creationTimestamp":"2023-10-25T21:30:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"141ac871-bb5e-4c1b-8ac4-12316f895547","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:30:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"141ac871-bb5e-4c1b-8ac4-12316f895547\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1025 21:30:45.458017  105113 system_pods.go:59] 8 kube-system pods found
	I1025 21:30:45.458061  105113 system_pods.go:61] "coredns-5dd5756b68-knfr2" [aaf2ddcd-6832-4476-b04f-12e4fdd933b8] Running
	I1025 21:30:45.458072  105113 system_pods.go:61] "etcd-multinode-874778" [732babe1-d90c-4663-bbbc-acbca47036e2] Running
	I1025 21:30:45.458081  105113 system_pods.go:61] "kindnet-2542b" [0664fe89-7c36-4f5c-ad60-0dbb8f47c413] Running
	I1025 21:30:45.458094  105113 system_pods.go:61] "kube-apiserver-multinode-874778" [ef34869e-ca49-4a2c-96c7-7f7e9bc648d2] Running
	I1025 21:30:45.458103  105113 system_pods.go:61] "kube-controller-manager-multinode-874778" [29064f70-ec6c-4d84-ab29-55aa9fdf9013] Running
	I1025 21:30:45.458115  105113 system_pods.go:61] "kube-proxy-msn2q" [75b8f03b-41ea-45cd-9128-daed81df1ecc] Running
	I1025 21:30:45.458123  105113 system_pods.go:61] "kube-scheduler-multinode-874778" [946650c6-c5ab-4c2a-8904-f989727728c7] Running
	I1025 21:30:45.458131  105113 system_pods.go:61] "storage-provisioner" [5e3d74f9-b847-40f1-b4bd-9f5e09f1249e] Running
	I1025 21:30:45.458140  105113 system_pods.go:74] duration metric: took 186.931091ms to wait for pod list to return data ...
	I1025 21:30:45.458154  105113 default_sa.go:34] waiting for default service account to be created ...
	I1025 21:30:45.651515  105113 request.go:629] Waited for 193.28792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1025 21:30:45.651571  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1025 21:30:45.651575  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:45.651582  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:45.651588  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:45.653828  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:45.656329  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:45.656340  105113 round_trippers.go:580]     Content-Length: 261
	I1025 21:30:45.656346  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:45 GMT
	I1025 21:30:45.656351  105113 round_trippers.go:580]     Audit-Id: dc25cf04-6ed3-47bd-aee3-f20acae45e48
	I1025 21:30:45.656357  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:45.656362  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:45.656368  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:45.656382  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:45.656402  105113 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4d90e083-c103-49f0-a4ca-dd96eceaae96","resourceVersion":"330","creationTimestamp":"2023-10-25T21:30:11Z"}}]}
	I1025 21:30:45.656640  105113 default_sa.go:45] found service account: "default"
	I1025 21:30:45.656657  105113 default_sa.go:55] duration metric: took 198.497719ms for default service account to be created ...
	I1025 21:30:45.656665  105113 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 21:30:45.852076  105113 request.go:629] Waited for 195.342858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1025 21:30:45.852137  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1025 21:30:45.852142  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:45.852150  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:45.852156  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:45.855297  105113 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 21:30:45.855323  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:45.855333  105113 round_trippers.go:580]     Audit-Id: c939756f-5638-4a34-b85b-ce92d39c697b
	I1025 21:30:45.855342  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:45.855354  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:45.855363  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:45.855373  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:45.855386  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:45 GMT
	I1025 21:30:45.855888  105113 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"coredns-5dd5756b68-knfr2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aaf2ddcd-6832-4476-b04f-12e4fdd933b8","resourceVersion":"442","creationTimestamp":"2023-10-25T21:30:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"141ac871-bb5e-4c1b-8ac4-12316f895547","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:30:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"141ac871-bb5e-4c1b-8ac4-12316f895547\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1025 21:30:45.857560  105113 system_pods.go:86] 8 kube-system pods found
	I1025 21:30:45.857577  105113 system_pods.go:89] "coredns-5dd5756b68-knfr2" [aaf2ddcd-6832-4476-b04f-12e4fdd933b8] Running
	I1025 21:30:45.857582  105113 system_pods.go:89] "etcd-multinode-874778" [732babe1-d90c-4663-bbbc-acbca47036e2] Running
	I1025 21:30:45.857586  105113 system_pods.go:89] "kindnet-2542b" [0664fe89-7c36-4f5c-ad60-0dbb8f47c413] Running
	I1025 21:30:45.857593  105113 system_pods.go:89] "kube-apiserver-multinode-874778" [ef34869e-ca49-4a2c-96c7-7f7e9bc648d2] Running
	I1025 21:30:45.857601  105113 system_pods.go:89] "kube-controller-manager-multinode-874778" [29064f70-ec6c-4d84-ab29-55aa9fdf9013] Running
	I1025 21:30:45.857605  105113 system_pods.go:89] "kube-proxy-msn2q" [75b8f03b-41ea-45cd-9128-daed81df1ecc] Running
	I1025 21:30:45.857611  105113 system_pods.go:89] "kube-scheduler-multinode-874778" [946650c6-c5ab-4c2a-8904-f989727728c7] Running
	I1025 21:30:45.857620  105113 system_pods.go:89] "storage-provisioner" [5e3d74f9-b847-40f1-b4bd-9f5e09f1249e] Running
	I1025 21:30:45.857628  105113 system_pods.go:126] duration metric: took 200.956751ms to wait for k8s-apps to be running ...
	I1025 21:30:45.857635  105113 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 21:30:45.857675  105113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:30:45.867780  105113 system_svc.go:56] duration metric: took 10.13868ms WaitForService to wait for kubelet.
	I1025 21:30:45.867804  105113 kubeadm.go:581] duration metric: took 33.815682622s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 21:30:45.867823  105113 node_conditions.go:102] verifying NodePressure condition ...
	I1025 21:30:46.052218  105113 request.go:629] Waited for 184.338431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1025 21:30:46.052277  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1025 21:30:46.052282  105113 round_trippers.go:469] Request Headers:
	I1025 21:30:46.052290  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:30:46.052297  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:30:46.054561  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:30:46.054584  105113 round_trippers.go:577] Response Headers:
	I1025 21:30:46.054594  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:30:46.054602  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:30:46 GMT
	I1025 21:30:46.054610  105113 round_trippers.go:580]     Audit-Id: a8b2e7a3-4baa-40d2-a857-661d5191a32a
	I1025 21:30:46.054618  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:30:46.054629  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:30:46.054638  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:30:46.054740  105113 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I1025 21:30:46.055065  105113 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 21:30:46.055080  105113 node_conditions.go:123] node cpu capacity is 8
	I1025 21:30:46.055090  105113 node_conditions.go:105] duration metric: took 187.26321ms to run NodePressure ...
	I1025 21:30:46.055100  105113 start.go:228] waiting for startup goroutines ...
	I1025 21:30:46.055109  105113 start.go:233] waiting for cluster config update ...
	I1025 21:30:46.055141  105113 start.go:242] writing updated cluster config ...
	I1025 21:30:46.057574  105113 out.go:177] 
	I1025 21:30:46.059274  105113 config.go:182] Loaded profile config "multinode-874778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:30:46.059334  105113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/config.json ...
	I1025 21:30:46.061236  105113 out.go:177] * Starting worker node multinode-874778-m02 in cluster multinode-874778
	I1025 21:30:46.062535  105113 cache.go:121] Beginning downloading kic base image for docker with crio
	I1025 21:30:46.064011  105113 out.go:177] * Pulling base image ...
	I1025 21:30:46.065972  105113 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 21:30:46.065988  105113 cache.go:56] Caching tarball of preloaded images
	I1025 21:30:46.066054  105113 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 21:30:46.066077  105113 preload.go:174] Found /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 21:30:46.066091  105113 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1025 21:30:46.066191  105113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/config.json ...
	I1025 21:30:46.081544  105113 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 21:30:46.081567  105113 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 21:30:46.081584  105113 cache.go:194] Successfully downloaded all kic artifacts
	I1025 21:30:46.081610  105113 start.go:365] acquiring machines lock for multinode-874778-m02: {Name:mkbb16e1cb6aac49682c380f6be85919924f8ecf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:30:46.081700  105113 start.go:369] acquired machines lock for "multinode-874778-m02" in 75.005µs
	I1025 21:30:46.081721  105113 start.go:93] Provisioning new machine with config: &{Name:multinode-874778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-874778 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1025 21:30:46.081784  105113 start.go:125] createHost starting for "m02" (driver="docker")
	I1025 21:30:46.084811  105113 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 21:30:46.084937  105113 start.go:159] libmachine.API.Create for "multinode-874778" (driver="docker")
	I1025 21:30:46.084963  105113 client.go:168] LocalClient.Create starting
	I1025 21:30:46.085068  105113 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem
	I1025 21:30:46.085111  105113 main.go:141] libmachine: Decoding PEM data...
	I1025 21:30:46.085183  105113 main.go:141] libmachine: Parsing certificate...
	I1025 21:30:46.085263  105113 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem
	I1025 21:30:46.085297  105113 main.go:141] libmachine: Decoding PEM data...
	I1025 21:30:46.085312  105113 main.go:141] libmachine: Parsing certificate...
	I1025 21:30:46.085561  105113 cli_runner.go:164] Run: docker network inspect multinode-874778 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:30:46.101322  105113 network_create.go:77] Found existing network {name:multinode-874778 subnet:0xc0034c7950 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1025 21:30:46.101355  105113 kic.go:118] calculated static IP "192.168.58.3" for the "multinode-874778-m02" container
	I1025 21:30:46.101403  105113 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:30:46.116801  105113 cli_runner.go:164] Run: docker volume create multinode-874778-m02 --label name.minikube.sigs.k8s.io=multinode-874778-m02 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:30:46.132987  105113 oci.go:103] Successfully created a docker volume multinode-874778-m02
	I1025 21:30:46.133067  105113 cli_runner.go:164] Run: docker run --rm --name multinode-874778-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-874778-m02 --entrypoint /usr/bin/test -v multinode-874778-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1025 21:30:46.599405  105113 oci.go:107] Successfully prepared a docker volume multinode-874778-m02
	I1025 21:30:46.599448  105113 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 21:30:46.599477  105113 kic.go:191] Starting extracting preloaded images to volume ...
	I1025 21:30:46.599544  105113 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-874778-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 21:30:51.726724  105113 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-874778-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (5.127132889s)
	I1025 21:30:51.726760  105113 kic.go:200] duration metric: took 5.127279 seconds to extract preloaded images to volume
	W1025 21:30:51.726890  105113 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 21:30:51.726975  105113 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 21:30:51.777957  105113 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-874778-m02 --name multinode-874778-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-874778-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-874778-m02 --network multinode-874778 --ip 192.168.58.3 --volume multinode-874778-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 21:30:52.072068  105113 cli_runner.go:164] Run: docker container inspect multinode-874778-m02 --format={{.State.Running}}
	I1025 21:30:52.090356  105113 cli_runner.go:164] Run: docker container inspect multinode-874778-m02 --format={{.State.Status}}
	I1025 21:30:52.108069  105113 cli_runner.go:164] Run: docker exec multinode-874778-m02 stat /var/lib/dpkg/alternatives/iptables
	I1025 21:30:52.147144  105113 oci.go:144] the created container "multinode-874778-m02" has a running status.
	I1025 21:30:52.147172  105113 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778-m02/id_rsa...
	I1025 21:30:52.244475  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1025 21:30:52.244520  105113 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 21:30:52.264558  105113 cli_runner.go:164] Run: docker container inspect multinode-874778-m02 --format={{.State.Status}}
	I1025 21:30:52.279978  105113 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 21:30:52.279997  105113 kic_runner.go:114] Args: [docker exec --privileged multinode-874778-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 21:30:52.347159  105113 cli_runner.go:164] Run: docker container inspect multinode-874778-m02 --format={{.State.Status}}
	I1025 21:30:52.365819  105113 machine.go:88] provisioning docker machine ...
	I1025 21:30:52.365854  105113 ubuntu.go:169] provisioning hostname "multinode-874778-m02"
	I1025 21:30:52.365915  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778-m02
	I1025 21:30:52.381825  105113 main.go:141] libmachine: Using SSH client type: native
	I1025 21:30:52.382159  105113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1025 21:30:52.382173  105113 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-874778-m02 && echo "multinode-874778-m02" | sudo tee /etc/hostname
	I1025 21:30:52.382808  105113 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35286->127.0.0.1:32852: read: connection reset by peer
	I1025 21:30:55.512542  105113 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-874778-m02
	
	I1025 21:30:55.512624  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778-m02
	I1025 21:30:55.528511  105113 main.go:141] libmachine: Using SSH client type: native
	I1025 21:30:55.528872  105113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1025 21:30:55.528915  105113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-874778-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-874778-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-874778-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 21:30:55.650049  105113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 21:30:55.650076  105113 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17488-11542/.minikube CaCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17488-11542/.minikube}
	I1025 21:30:55.650095  105113 ubuntu.go:177] setting up certificates
	I1025 21:30:55.650103  105113 provision.go:83] configureAuth start
	I1025 21:30:55.650156  105113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-874778-m02
	I1025 21:30:55.667006  105113 provision.go:138] copyHostCerts
	I1025 21:30:55.667043  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem
	I1025 21:30:55.667077  105113 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem, removing ...
	I1025 21:30:55.667087  105113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem
	I1025 21:30:55.667161  105113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem (1078 bytes)
	I1025 21:30:55.667254  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem
	I1025 21:30:55.667279  105113 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem, removing ...
	I1025 21:30:55.667285  105113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem
	I1025 21:30:55.667325  105113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem (1123 bytes)
	I1025 21:30:55.667386  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem
	I1025 21:30:55.667412  105113 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem, removing ...
	I1025 21:30:55.667418  105113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem
	I1025 21:30:55.667450  105113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem (1675 bytes)
	I1025 21:30:55.667511  105113 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem org=jenkins.multinode-874778-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-874778-m02]
	I1025 21:30:55.764607  105113 provision.go:172] copyRemoteCerts
	I1025 21:30:55.764676  105113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 21:30:55.764722  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778-m02
	I1025 21:30:55.780865  105113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778-m02/id_rsa Username:docker}
	I1025 21:30:55.874584  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 21:30:55.874664  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 21:30:55.895901  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 21:30:55.895965  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1025 21:30:55.916201  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 21:30:55.916260  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 21:30:55.936441  105113 provision.go:86] duration metric: configureAuth took 286.326305ms
	I1025 21:30:55.936467  105113 ubuntu.go:193] setting minikube options for container-runtime
	I1025 21:30:55.936661  105113 config.go:182] Loaded profile config "multinode-874778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:30:55.936762  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778-m02
	I1025 21:30:55.953604  105113 main.go:141] libmachine: Using SSH client type: native
	I1025 21:30:55.953950  105113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1025 21:30:55.953969  105113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 21:30:56.153714  105113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 21:30:56.153748  105113 machine.go:91] provisioned docker machine in 3.787907762s
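The SSH command above drops a sysconfig snippet so CRI-O starts with minikube's extra flags. A minimal sketch of that drop-in, written to a temp file here instead of /etc/sysconfig/crio.minikube (no sudo or systemctl needed for the demo):

```shell
# Replay of the drop-in write from the log, against a scratch path.
set -eu
f=$(mktemp)
printf '%s' "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" > "$f"
cat "$f"
```

On the real node this file is read by the crio unit, so a `systemctl restart crio` follows the write.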
	I1025 21:30:56.153760  105113 client.go:171] LocalClient.Create took 10.068789833s
	I1025 21:30:56.153777  105113 start.go:167] duration metric: libmachine.API.Create for "multinode-874778" took 10.068842038s
	I1025 21:30:56.153786  105113 start.go:300] post-start starting for "multinode-874778-m02" (driver="docker")
	I1025 21:30:56.153798  105113 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 21:30:56.153870  105113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 21:30:56.153915  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778-m02
	I1025 21:30:56.170430  105113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778-m02/id_rsa Username:docker}
	I1025 21:30:56.258614  105113 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 21:30:56.261483  105113 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1025 21:30:56.261499  105113 command_runner.go:130] > NAME="Ubuntu"
	I1025 21:30:56.261508  105113 command_runner.go:130] > VERSION_ID="22.04"
	I1025 21:30:56.261517  105113 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1025 21:30:56.261525  105113 command_runner.go:130] > VERSION_CODENAME=jammy
	I1025 21:30:56.261532  105113 command_runner.go:130] > ID=ubuntu
	I1025 21:30:56.261538  105113 command_runner.go:130] > ID_LIKE=debian
	I1025 21:30:56.261546  105113 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1025 21:30:56.261554  105113 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1025 21:30:56.261573  105113 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1025 21:30:56.261587  105113 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1025 21:30:56.261594  105113 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1025 21:30:56.261684  105113 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 21:30:56.261708  105113 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 21:30:56.261717  105113 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 21:30:56.261723  105113 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 21:30:56.261735  105113 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-11542/.minikube/addons for local assets ...
	I1025 21:30:56.261779  105113 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-11542/.minikube/files for local assets ...
	I1025 21:30:56.261842  105113 filesync.go:149] local asset: /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem -> 183232.pem in /etc/ssl/certs
	I1025 21:30:56.261851  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem -> /etc/ssl/certs/183232.pem
	I1025 21:30:56.261925  105113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 21:30:56.269377  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem --> /etc/ssl/certs/183232.pem (1708 bytes)
	I1025 21:30:56.289791  105113 start.go:303] post-start completed in 135.99357ms
	I1025 21:30:56.290119  105113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-874778-m02
	I1025 21:30:56.306001  105113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/config.json ...
	I1025 21:30:56.306265  105113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:30:56.306337  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778-m02
	I1025 21:30:56.321629  105113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778-m02/id_rsa Username:docker}
	I1025 21:30:56.406496  105113 command_runner.go:130] > 24%
	I1025 21:30:56.406740  105113 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:30:56.410552  105113 command_runner.go:130] > 224G
	I1025 21:30:56.410583  105113 start.go:128] duration metric: createHost completed in 10.328789089s
	I1025 21:30:56.410596  105113 start.go:83] releasing machines lock for "multinode-874778-m02", held for 10.328884927s
	I1025 21:30:56.410661  105113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-874778-m02
	I1025 21:30:56.428420  105113 out.go:177] * Found network options:
	I1025 21:30:56.430166  105113 out.go:177]   - NO_PROXY=192.168.58.2
	W1025 21:30:56.431664  105113 proxy.go:119] fail to check proxy env: Error ip not in block
	W1025 21:30:56.431718  105113 proxy.go:119] fail to check proxy env: Error ip not in block
	I1025 21:30:56.431791  105113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 21:30:56.431835  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778-m02
	I1025 21:30:56.431855  105113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 21:30:56.431912  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778-m02
	I1025 21:30:56.447674  105113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778-m02/id_rsa Username:docker}
	I1025 21:30:56.449086  105113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778-m02/id_rsa Username:docker}
	I1025 21:30:56.619257  105113 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1025 21:30:56.663308  105113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 21:30:56.667562  105113 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1025 21:30:56.667580  105113 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1025 21:30:56.667587  105113 command_runner.go:130] > Device: b0h/176d	Inode: 552112      Links: 1
	I1025 21:30:56.667593  105113 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 21:30:56.667599  105113 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1025 21:30:56.667603  105113 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1025 21:30:56.667608  105113 command_runner.go:130] > Change: 2023-10-25 21:11:12.050317769 +0000
	I1025 21:30:56.667615  105113 command_runner.go:130] >  Birth: 2023-10-25 21:11:12.050317769 +0000
	I1025 21:30:56.667669  105113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:30:56.685386  105113 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1025 21:30:56.685473  105113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:30:56.712216  105113 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1025 21:30:56.712272  105113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
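The find/mv step above disables competing CNI configs by renaming them to *.mk_disabled. A sketch of the same rename, replayed on a scratch directory instead of /etc/cni/net.d (the `10-kindnet.conflist` survivor file is a hypothetical stand-in, not from the log):

```shell
# Rename bridge/podman CNI configs to *.mk_disabled; leave everything else.
set -eu
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/100-crio-bridge.conf" "$d/10-kindnet.conflist"
find "$d" -maxdepth 1 -type f \( -name '*bridge*' -o -name '*podman*' \) \
  -not -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d"
```

The `-not -name '*.mk_disabled'` guard makes the step idempotent across repeated provisioning runs.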
	I1025 21:30:56.712280  105113 start.go:472] detecting cgroup driver to use...
	I1025 21:30:56.712308  105113 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 21:30:56.712364  105113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 21:30:56.726025  105113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 21:30:56.736208  105113 docker.go:198] disabling cri-docker service (if available) ...
	I1025 21:30:56.736263  105113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 21:30:56.748005  105113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 21:30:56.759889  105113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 21:30:56.842083  105113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 21:30:56.854912  105113 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1025 21:30:56.923029  105113 docker.go:214] disabling docker service ...
	I1025 21:30:56.923115  105113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 21:30:56.939698  105113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 21:30:56.950584  105113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 21:30:57.030429  105113 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1025 21:30:57.030517  105113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 21:30:57.107025  105113 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1025 21:30:57.107083  105113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 21:30:57.116795  105113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 21:30:57.130000  105113 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
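The tee command above writes /etc/crictl.yaml so crictl targets the CRI-O socket. The same one-line config, written to a temp file for inspection:

```shell
# Minimal crictl config pointing at CRI-O's socket (scratch copy, not /etc/crictl.yaml).
set -eu
cfg=$(mktemp)
printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > "$cfg"
cat "$cfg"
```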
	I1025 21:30:57.130800  105113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 21:30:57.130860  105113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:30:57.138962  105113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 21:30:57.139021  105113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:30:57.147381  105113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:30:57.155432  105113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:30:57.163571  105113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
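The four sed invocations above rewrite CRI-O's drop-in config for the pause image and cgroup driver. Here they are replayed on a scratch copy of 02-crio.conf so the before/after is easy to see; the starting values (`pause:3.6`, `systemd`, `system.slice`) are hypothetical, and `sed -i` without a suffix assumes GNU sed:

```shell
# Replay of the log's sed edits on a scratch 02-crio.conf.
set -eu
conf=$(mktemp)
printf '%s\n' 'pause_image = "registry.k8s.io/pause:3.6"' \
              'cgroup_manager = "systemd"' \
              'conmon_cgroup = "system.slice"' > "$conf"
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"                                # drop any old value
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"         # re-add the one CRI-O expects
cat "$conf"
```

Deleting and re-appending `conmon_cgroup` (rather than editing it in place) keeps the edit correct whether or not the key existed before.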
	I1025 21:30:57.171163  105113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 21:30:57.178381  105113 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1025 21:30:57.178444  105113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 21:30:57.185749  105113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 21:30:57.257884  105113 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 21:30:57.368457  105113 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 21:30:57.368523  105113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 21:30:57.371842  105113 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1025 21:30:57.371871  105113 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1025 21:30:57.371881  105113 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I1025 21:30:57.371895  105113 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 21:30:57.371904  105113 command_runner.go:130] > Access: 2023-10-25 21:30:57.355063901 +0000
	I1025 21:30:57.371917  105113 command_runner.go:130] > Modify: 2023-10-25 21:30:57.355063901 +0000
	I1025 21:30:57.371930  105113 command_runner.go:130] > Change: 2023-10-25 21:30:57.355063901 +0000
	I1025 21:30:57.371940  105113 command_runner.go:130] >  Birth: -
	I1025 21:30:57.371969  105113 start.go:540] Will wait 60s for crictl version
	I1025 21:30:57.372011  105113 ssh_runner.go:195] Run: which crictl
	I1025 21:30:57.374872  105113 command_runner.go:130] > /usr/bin/crictl
	I1025 21:30:57.374932  105113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 21:30:57.405782  105113 command_runner.go:130] > Version:  0.1.0
	I1025 21:30:57.405804  105113 command_runner.go:130] > RuntimeName:  cri-o
	I1025 21:30:57.405811  105113 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1025 21:30:57.405820  105113 command_runner.go:130] > RuntimeApiVersion:  v1
	I1025 21:30:57.405841  105113 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1025 21:30:57.405907  105113 ssh_runner.go:195] Run: crio --version
	I1025 21:30:57.435503  105113 command_runner.go:130] > crio version 1.24.6
	I1025 21:30:57.435522  105113 command_runner.go:130] > Version:          1.24.6
	I1025 21:30:57.435541  105113 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1025 21:30:57.435546  105113 command_runner.go:130] > GitTreeState:     clean
	I1025 21:30:57.435552  105113 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1025 21:30:57.435557  105113 command_runner.go:130] > GoVersion:        go1.18.2
	I1025 21:30:57.435561  105113 command_runner.go:130] > Compiler:         gc
	I1025 21:30:57.435566  105113 command_runner.go:130] > Platform:         linux/amd64
	I1025 21:30:57.435571  105113 command_runner.go:130] > Linkmode:         dynamic
	I1025 21:30:57.435578  105113 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1025 21:30:57.435583  105113 command_runner.go:130] > SeccompEnabled:   true
	I1025 21:30:57.435587  105113 command_runner.go:130] > AppArmorEnabled:  false
	I1025 21:30:57.436888  105113 ssh_runner.go:195] Run: crio --version
	I1025 21:30:57.467967  105113 command_runner.go:130] > crio version 1.24.6
	I1025 21:30:57.467993  105113 command_runner.go:130] > Version:          1.24.6
	I1025 21:30:57.468000  105113 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1025 21:30:57.468005  105113 command_runner.go:130] > GitTreeState:     clean
	I1025 21:30:57.468011  105113 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1025 21:30:57.468015  105113 command_runner.go:130] > GoVersion:        go1.18.2
	I1025 21:30:57.468019  105113 command_runner.go:130] > Compiler:         gc
	I1025 21:30:57.468031  105113 command_runner.go:130] > Platform:         linux/amd64
	I1025 21:30:57.468036  105113 command_runner.go:130] > Linkmode:         dynamic
	I1025 21:30:57.468048  105113 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1025 21:30:57.468058  105113 command_runner.go:130] > SeccompEnabled:   true
	I1025 21:30:57.468076  105113 command_runner.go:130] > AppArmorEnabled:  false
	I1025 21:30:57.471384  105113 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1025 21:30:57.473184  105113 out.go:177]   - env NO_PROXY=192.168.58.2
	I1025 21:30:57.474720  105113 cli_runner.go:164] Run: docker network inspect multinode-874778 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:30:57.490977  105113 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1025 21:30:57.494233  105113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
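The bash one-liner above refreshes the host.minikube.internal entry in /etc/hosts: filter out any stale line, append the current gateway IP, then swap the file into place. A sketch on a scratch hosts file (the stale `10.0.0.9` entry is a made-up example):

```shell
# Replace any existing host.minikube.internal mapping with the gateway IP.
set -eu
h=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$h"
{ grep -v 'host.minikube.internal$' "$h"; \
  printf '192.168.58.1\thost.minikube.internal\n'; } > "$h.new"
mv "$h.new" "$h"   # write-then-rename keeps the file whole at every instant
cat "$h"
```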
	I1025 21:30:57.503964  105113 certs.go:56] Setting up /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778 for IP: 192.168.58.3
	I1025 21:30:57.504002  105113 certs.go:190] acquiring lock for shared ca certs: {Name:mk35413dbabac2652d1fa66d4e17d237360108a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:30:57.504149  105113 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key
	I1025 21:30:57.504207  105113 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key
	I1025 21:30:57.504226  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 21:30:57.504252  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 21:30:57.504273  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 21:30:57.504294  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 21:30:57.504368  105113 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/18323.pem (1338 bytes)
	W1025 21:30:57.504420  105113 certs.go:433] ignoring /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/18323_empty.pem, impossibly tiny 0 bytes
	I1025 21:30:57.504437  105113 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 21:30:57.504481  105113 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem (1078 bytes)
	I1025 21:30:57.504532  105113 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem (1123 bytes)
	I1025 21:30:57.504572  105113 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem (1675 bytes)
	I1025 21:30:57.504638  105113 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem (1708 bytes)
	I1025 21:30:57.504685  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:30:57.504709  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/18323.pem -> /usr/share/ca-certificates/18323.pem
	I1025 21:30:57.504730  105113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem -> /usr/share/ca-certificates/183232.pem
	I1025 21:30:57.505093  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 21:30:57.525463  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 21:30:57.545380  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 21:30:57.567080  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 21:30:57.587721  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 21:30:57.607101  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/certs/18323.pem --> /usr/share/ca-certificates/18323.pem (1338 bytes)
	I1025 21:30:57.627456  105113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem --> /usr/share/ca-certificates/183232.pem (1708 bytes)
	I1025 21:30:57.647771  105113 ssh_runner.go:195] Run: openssl version
	I1025 21:30:57.652582  105113 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1025 21:30:57.652652  105113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 21:30:57.660615  105113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:30:57.663846  105113 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 25 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:30:57.663896  105113 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:30:57.663938  105113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:30:57.669899  105113 command_runner.go:130] > b5213941
	I1025 21:30:57.669944  105113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 21:30:57.677666  105113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18323.pem && ln -fs /usr/share/ca-certificates/18323.pem /etc/ssl/certs/18323.pem"
	I1025 21:30:57.685751  105113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18323.pem
	I1025 21:30:57.688814  105113 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 25 21:17 /usr/share/ca-certificates/18323.pem
	I1025 21:30:57.688863  105113 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:17 /usr/share/ca-certificates/18323.pem
	I1025 21:30:57.688904  105113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18323.pem
	I1025 21:30:57.694982  105113 command_runner.go:130] > 51391683
	I1025 21:30:57.695031  105113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18323.pem /etc/ssl/certs/51391683.0"
	I1025 21:30:57.702993  105113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183232.pem && ln -fs /usr/share/ca-certificates/183232.pem /etc/ssl/certs/183232.pem"
	I1025 21:30:57.710888  105113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183232.pem
	I1025 21:30:57.713774  105113 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 25 21:17 /usr/share/ca-certificates/183232.pem
	I1025 21:30:57.713806  105113 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:17 /usr/share/ca-certificates/183232.pem
	I1025 21:30:57.713844  105113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183232.pem
	I1025 21:30:57.719834  105113 command_runner.go:130] > 3ec20f2e
	I1025 21:30:57.719877  105113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183232.pem /etc/ssl/certs/3ec20f2e.0"
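The openssl/ln pairs above implement OpenSSL's directory trust-store lookup: each CA in /etc/ssl/certs is reachable via a symlink named <subject-hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). A sketch of the same scheme with a throwaway self-signed cert in a temp dir (requires the openssl CLI):

```shell
# Link a CA PEM under its subject-hash name, as minikube does in /etc/ssl/certs.
set -eu
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' -days 1 \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")   # e.g. an 8-hex-digit value
ln -fs "$dir/ca.pem" "$dir/$hash.0"
echo "$hash"
```

The trailing `.0` is a collision counter; a second CA with the same subject hash would get `.1`.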
	I1025 21:30:57.727698  105113 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 21:30:57.730572  105113 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 21:30:57.730617  105113 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 21:30:57.730684  105113 ssh_runner.go:195] Run: crio config
	I1025 21:30:57.765187  105113 command_runner.go:130] ! time="2023-10-25 21:30:57.764786092Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1025 21:30:57.765222  105113 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1025 21:30:57.770196  105113 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1025 21:30:57.770228  105113 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1025 21:30:57.770239  105113 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1025 21:30:57.770246  105113 command_runner.go:130] > #
	I1025 21:30:57.770258  105113 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1025 21:30:57.770299  105113 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1025 21:30:57.770309  105113 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1025 21:30:57.770319  105113 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1025 21:30:57.770323  105113 command_runner.go:130] > # reload'.
	I1025 21:30:57.770330  105113 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1025 21:30:57.770336  105113 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1025 21:30:57.770342  105113 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1025 21:30:57.770351  105113 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1025 21:30:57.770357  105113 command_runner.go:130] > [crio]
	I1025 21:30:57.770366  105113 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1025 21:30:57.770374  105113 command_runner.go:130] > # containers images, in this directory.
	I1025 21:30:57.770382  105113 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1025 21:30:57.770391  105113 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1025 21:30:57.770399  105113 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1025 21:30:57.770408  105113 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1025 21:30:57.770416  105113 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1025 21:30:57.770423  105113 command_runner.go:130] > # storage_driver = "vfs"
	I1025 21:30:57.770434  105113 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1025 21:30:57.770442  105113 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1025 21:30:57.770447  105113 command_runner.go:130] > # storage_option = [
	I1025 21:30:57.770453  105113 command_runner.go:130] > # ]
	I1025 21:30:57.770459  105113 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1025 21:30:57.770467  105113 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1025 21:30:57.770474  105113 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1025 21:30:57.770480  105113 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1025 21:30:57.770486  105113 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1025 21:30:57.770493  105113 command_runner.go:130] > # always happen on a node reboot
	I1025 21:30:57.770498  105113 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1025 21:30:57.770506  105113 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1025 21:30:57.770514  105113 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1025 21:30:57.770527  105113 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1025 21:30:57.770535  105113 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1025 21:30:57.770553  105113 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1025 21:30:57.770563  105113 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1025 21:30:57.770569  105113 command_runner.go:130] > # internal_wipe = true
	I1025 21:30:57.770575  105113 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1025 21:30:57.770584  105113 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1025 21:30:57.770591  105113 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1025 21:30:57.770599  105113 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1025 21:30:57.770605  105113 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1025 21:30:57.770611  105113 command_runner.go:130] > [crio.api]
	I1025 21:30:57.770617  105113 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1025 21:30:57.770624  105113 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1025 21:30:57.770629  105113 command_runner.go:130] > # IP address on which the stream server will listen.
	I1025 21:30:57.770636  105113 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1025 21:30:57.770643  105113 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1025 21:30:57.770650  105113 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1025 21:30:57.770657  105113 command_runner.go:130] > # stream_port = "0"
	I1025 21:30:57.770662  105113 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1025 21:30:57.770669  105113 command_runner.go:130] > # stream_enable_tls = false
	I1025 21:30:57.770676  105113 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1025 21:30:57.770683  105113 command_runner.go:130] > # stream_idle_timeout = ""
	I1025 21:30:57.770689  105113 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1025 21:30:57.770697  105113 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1025 21:30:57.770703  105113 command_runner.go:130] > # minutes.
	I1025 21:30:57.770707  105113 command_runner.go:130] > # stream_tls_cert = ""
	I1025 21:30:57.770715  105113 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1025 21:30:57.770721  105113 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1025 21:30:57.770728  105113 command_runner.go:130] > # stream_tls_key = ""
	I1025 21:30:57.770734  105113 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1025 21:30:57.770742  105113 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1025 21:30:57.770749  105113 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1025 21:30:57.770755  105113 command_runner.go:130] > # stream_tls_ca = ""
	I1025 21:30:57.770763  105113 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1025 21:30:57.770770  105113 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1025 21:30:57.770777  105113 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1025 21:30:57.770784  105113 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1025 21:30:57.770806  105113 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1025 21:30:57.770815  105113 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1025 21:30:57.770819  105113 command_runner.go:130] > [crio.runtime]
	I1025 21:30:57.770828  105113 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1025 21:30:57.770835  105113 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1025 21:30:57.770840  105113 command_runner.go:130] > # "nofile=1024:2048"
	I1025 21:30:57.770846  105113 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1025 21:30:57.770853  105113 command_runner.go:130] > # default_ulimits = [
	I1025 21:30:57.770856  105113 command_runner.go:130] > # ]
	I1025 21:30:57.770864  105113 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1025 21:30:57.770871  105113 command_runner.go:130] > # no_pivot = false
	I1025 21:30:57.770876  105113 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1025 21:30:57.770884  105113 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1025 21:30:57.770891  105113 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1025 21:30:57.770897  105113 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1025 21:30:57.770904  105113 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1025 21:30:57.770911  105113 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1025 21:30:57.770917  105113 command_runner.go:130] > # conmon = ""
	I1025 21:30:57.770921  105113 command_runner.go:130] > # Cgroup setting for conmon
	I1025 21:30:57.770931  105113 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1025 21:30:57.770939  105113 command_runner.go:130] > conmon_cgroup = "pod"
	I1025 21:30:57.770951  105113 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1025 21:30:57.770958  105113 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1025 21:30:57.770967  105113 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1025 21:30:57.770973  105113 command_runner.go:130] > # conmon_env = [
	I1025 21:30:57.770976  105113 command_runner.go:130] > # ]
	I1025 21:30:57.770985  105113 command_runner.go:130] > # Additional environment variables to set for all the
	I1025 21:30:57.770992  105113 command_runner.go:130] > # containers. These are overridden if set in the
	I1025 21:30:57.770998  105113 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1025 21:30:57.771004  105113 command_runner.go:130] > # default_env = [
	I1025 21:30:57.771007  105113 command_runner.go:130] > # ]
	I1025 21:30:57.771015  105113 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1025 21:30:57.771019  105113 command_runner.go:130] > # selinux = false
	I1025 21:30:57.771028  105113 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1025 21:30:57.771036  105113 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1025 21:30:57.771044  105113 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1025 21:30:57.771051  105113 command_runner.go:130] > # seccomp_profile = ""
	I1025 21:30:57.771057  105113 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1025 21:30:57.771065  105113 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1025 21:30:57.771073  105113 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1025 21:30:57.771079  105113 command_runner.go:130] > # which might increase security.
	I1025 21:30:57.771084  105113 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1025 21:30:57.771090  105113 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1025 21:30:57.771099  105113 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1025 21:30:57.771107  105113 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1025 21:30:57.771115  105113 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1025 21:30:57.771123  105113 command_runner.go:130] > # This option supports live configuration reload.
	I1025 21:30:57.771127  105113 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1025 21:30:57.771135  105113 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1025 21:30:57.771142  105113 command_runner.go:130] > # the cgroup blockio controller.
	I1025 21:30:57.771149  105113 command_runner.go:130] > # blockio_config_file = ""
	I1025 21:30:57.771155  105113 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1025 21:30:57.771162  105113 command_runner.go:130] > # irqbalance daemon.
	I1025 21:30:57.771167  105113 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1025 21:30:57.771175  105113 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1025 21:30:57.771180  105113 command_runner.go:130] > # This option supports live configuration reload.
	I1025 21:30:57.771186  105113 command_runner.go:130] > # rdt_config_file = ""
	I1025 21:30:57.771192  105113 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1025 21:30:57.771198  105113 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1025 21:30:57.771204  105113 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1025 21:30:57.771210  105113 command_runner.go:130] > # separate_pull_cgroup = ""
	I1025 21:30:57.771217  105113 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1025 21:30:57.771225  105113 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1025 21:30:57.771231  105113 command_runner.go:130] > # will be added.
	I1025 21:30:57.771236  105113 command_runner.go:130] > # default_capabilities = [
	I1025 21:30:57.771242  105113 command_runner.go:130] > # 	"CHOWN",
	I1025 21:30:57.771246  105113 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1025 21:30:57.771252  105113 command_runner.go:130] > # 	"FSETID",
	I1025 21:30:57.771256  105113 command_runner.go:130] > # 	"FOWNER",
	I1025 21:30:57.771262  105113 command_runner.go:130] > # 	"SETGID",
	I1025 21:30:57.771266  105113 command_runner.go:130] > # 	"SETUID",
	I1025 21:30:57.771270  105113 command_runner.go:130] > # 	"SETPCAP",
	I1025 21:30:57.771276  105113 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1025 21:30:57.771281  105113 command_runner.go:130] > # 	"KILL",
	I1025 21:30:57.771286  105113 command_runner.go:130] > # ]
	I1025 21:30:57.771294  105113 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1025 21:30:57.771302  105113 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1025 21:30:57.771309  105113 command_runner.go:130] > # add_inheritable_capabilities = true
	I1025 21:30:57.771315  105113 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1025 21:30:57.771324  105113 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1025 21:30:57.771331  105113 command_runner.go:130] > # default_sysctls = [
	I1025 21:30:57.771334  105113 command_runner.go:130] > # ]
	I1025 21:30:57.771341  105113 command_runner.go:130] > # List of devices on the host that a
	I1025 21:30:57.771347  105113 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1025 21:30:57.771354  105113 command_runner.go:130] > # allowed_devices = [
	I1025 21:30:57.771358  105113 command_runner.go:130] > # 	"/dev/fuse",
	I1025 21:30:57.771361  105113 command_runner.go:130] > # ]
	I1025 21:30:57.771366  105113 command_runner.go:130] > # List of additional devices, specified as
	I1025 21:30:57.771388  105113 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1025 21:30:57.771396  105113 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1025 21:30:57.771401  105113 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1025 21:30:57.771408  105113 command_runner.go:130] > # additional_devices = [
	I1025 21:30:57.771411  105113 command_runner.go:130] > # ]
	I1025 21:30:57.771419  105113 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1025 21:30:57.771425  105113 command_runner.go:130] > # cdi_spec_dirs = [
	I1025 21:30:57.771435  105113 command_runner.go:130] > # 	"/etc/cdi",
	I1025 21:30:57.771441  105113 command_runner.go:130] > # 	"/var/run/cdi",
	I1025 21:30:57.771444  105113 command_runner.go:130] > # ]
	I1025 21:30:57.771453  105113 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1025 21:30:57.771461  105113 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1025 21:30:57.771468  105113 command_runner.go:130] > # Defaults to false.
	I1025 21:30:57.771473  105113 command_runner.go:130] > # device_ownership_from_security_context = false
	I1025 21:30:57.771479  105113 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1025 21:30:57.771493  105113 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1025 21:30:57.771500  105113 command_runner.go:130] > # hooks_dir = [
	I1025 21:30:57.771505  105113 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1025 21:30:57.771511  105113 command_runner.go:130] > # ]
	I1025 21:30:57.771517  105113 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1025 21:30:57.771526  105113 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1025 21:30:57.771532  105113 command_runner.go:130] > # its default mounts from the following two files:
	I1025 21:30:57.771540  105113 command_runner.go:130] > #
	I1025 21:30:57.771548  105113 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1025 21:30:57.771557  105113 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1025 21:30:57.771565  105113 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1025 21:30:57.771571  105113 command_runner.go:130] > #
	I1025 21:30:57.771577  105113 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1025 21:30:57.771586  105113 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1025 21:30:57.771595  105113 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1025 21:30:57.771602  105113 command_runner.go:130] > #      only add mounts it finds in this file.
	I1025 21:30:57.771605  105113 command_runner.go:130] > #
	I1025 21:30:57.771610  105113 command_runner.go:130] > # default_mounts_file = ""
	I1025 21:30:57.771616  105113 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1025 21:30:57.771624  105113 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1025 21:30:57.771630  105113 command_runner.go:130] > # pids_limit = 0
	I1025 21:30:57.771636  105113 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1025 21:30:57.771645  105113 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1025 21:30:57.771653  105113 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1025 21:30:57.771664  105113 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1025 21:30:57.771670  105113 command_runner.go:130] > # log_size_max = -1
	I1025 21:30:57.771677  105113 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1025 21:30:57.771684  105113 command_runner.go:130] > # log_to_journald = false
	I1025 21:30:57.771690  105113 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1025 21:30:57.771697  105113 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1025 21:30:57.771702  105113 command_runner.go:130] > # Path to directory for container attach sockets.
	I1025 21:30:57.771709  105113 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1025 21:30:57.771714  105113 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1025 21:30:57.771721  105113 command_runner.go:130] > # bind_mount_prefix = ""
	I1025 21:30:57.771726  105113 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1025 21:30:57.771732  105113 command_runner.go:130] > # read_only = false
	I1025 21:30:57.771738  105113 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1025 21:30:57.771747  105113 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1025 21:30:57.771753  105113 command_runner.go:130] > # live configuration reload.
	I1025 21:30:57.771757  105113 command_runner.go:130] > # log_level = "info"
	I1025 21:30:57.771765  105113 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1025 21:30:57.771770  105113 command_runner.go:130] > # This option supports live configuration reload.
	I1025 21:30:57.771777  105113 command_runner.go:130] > # log_filter = ""
	I1025 21:30:57.771782  105113 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1025 21:30:57.771791  105113 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1025 21:30:57.771795  105113 command_runner.go:130] > # separated by comma.
	I1025 21:30:57.771801  105113 command_runner.go:130] > # uid_mappings = ""
	I1025 21:30:57.771807  105113 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1025 21:30:57.771815  105113 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1025 21:30:57.771821  105113 command_runner.go:130] > # separated by comma.
	I1025 21:30:57.771825  105113 command_runner.go:130] > # gid_mappings = ""
	I1025 21:30:57.771833  105113 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1025 21:30:57.771841  105113 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1025 21:30:57.771850  105113 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1025 21:30:57.771857  105113 command_runner.go:130] > # minimum_mappable_uid = -1
	I1025 21:30:57.771863  105113 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1025 21:30:57.771871  105113 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1025 21:30:57.771879  105113 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1025 21:30:57.771883  105113 command_runner.go:130] > # minimum_mappable_gid = -1
	I1025 21:30:57.771891  105113 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1025 21:30:57.771899  105113 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1025 21:30:57.771907  105113 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1025 21:30:57.771913  105113 command_runner.go:130] > # ctr_stop_timeout = 30
	I1025 21:30:57.771919  105113 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1025 21:30:57.771929  105113 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1025 21:30:57.771938  105113 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1025 21:30:57.771945  105113 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1025 21:30:57.771951  105113 command_runner.go:130] > # drop_infra_ctr = true
	I1025 21:30:57.771957  105113 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1025 21:30:57.771965  105113 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1025 21:30:57.771974  105113 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1025 21:30:57.771981  105113 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1025 21:30:57.771987  105113 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1025 21:30:57.771994  105113 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1025 21:30:57.772000  105113 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1025 21:30:57.772007  105113 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1025 21:30:57.772014  105113 command_runner.go:130] > # pinns_path = ""
	I1025 21:30:57.772020  105113 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1025 21:30:57.772030  105113 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1025 21:30:57.772038  105113 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1025 21:30:57.772044  105113 command_runner.go:130] > # default_runtime = "runc"
	I1025 21:30:57.772049  105113 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1025 21:30:57.772059  105113 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1025 21:30:57.772070  105113 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1025 21:30:57.772077  105113 command_runner.go:130] > # creation as a file is not desired either.
	I1025 21:30:57.772087  105113 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1025 21:30:57.772094  105113 command_runner.go:130] > # the hostname is being managed dynamically.
	I1025 21:30:57.772099  105113 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1025 21:30:57.772104  105113 command_runner.go:130] > # ]
	I1025 21:30:57.772110  105113 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1025 21:30:57.772120  105113 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1025 21:30:57.772129  105113 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1025 21:30:57.772137  105113 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1025 21:30:57.772141  105113 command_runner.go:130] > #
	I1025 21:30:57.772146  105113 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1025 21:30:57.772152  105113 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1025 21:30:57.772158  105113 command_runner.go:130] > #  runtime_type = "oci"
	I1025 21:30:57.772163  105113 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1025 21:30:57.772170  105113 command_runner.go:130] > #  privileged_without_host_devices = false
	I1025 21:30:57.772175  105113 command_runner.go:130] > #  allowed_annotations = []
	I1025 21:30:57.772180  105113 command_runner.go:130] > # Where:
	I1025 21:30:57.772185  105113 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1025 21:30:57.772194  105113 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1025 21:30:57.772203  105113 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1025 21:30:57.772212  105113 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1025 21:30:57.772218  105113 command_runner.go:130] > #   in $PATH.
	I1025 21:30:57.772224  105113 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1025 21:30:57.772232  105113 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1025 21:30:57.772238  105113 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1025 21:30:57.772244  105113 command_runner.go:130] > #   state.
	I1025 21:30:57.772250  105113 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1025 21:30:57.772258  105113 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1025 21:30:57.772267  105113 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1025 21:30:57.772274  105113 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1025 21:30:57.772283  105113 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1025 21:30:57.772292  105113 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1025 21:30:57.772299  105113 command_runner.go:130] > #   The currently recognized values are:
	I1025 21:30:57.772305  105113 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1025 21:30:57.772314  105113 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1025 21:30:57.772322  105113 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1025 21:30:57.772328  105113 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1025 21:30:57.772338  105113 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1025 21:30:57.772346  105113 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1025 21:30:57.772354  105113 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1025 21:30:57.772364  105113 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1025 21:30:57.772371  105113 command_runner.go:130] > #   should be moved to the container's cgroup
	I1025 21:30:57.772377  105113 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1025 21:30:57.772384  105113 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1025 21:30:57.772388  105113 command_runner.go:130] > runtime_type = "oci"
	I1025 21:30:57.772393  105113 command_runner.go:130] > runtime_root = "/run/runc"
	I1025 21:30:57.772399  105113 command_runner.go:130] > runtime_config_path = ""
	I1025 21:30:57.772404  105113 command_runner.go:130] > monitor_path = ""
	I1025 21:30:57.772410  105113 command_runner.go:130] > monitor_cgroup = ""
	I1025 21:30:57.772414  105113 command_runner.go:130] > monitor_exec_cgroup = ""
	I1025 21:30:57.772446  105113 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1025 21:30:57.772452  105113 command_runner.go:130] > # running containers
	I1025 21:30:57.772457  105113 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1025 21:30:57.772463  105113 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1025 21:30:57.772472  105113 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1025 21:30:57.772477  105113 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1025 21:30:57.772485  105113 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1025 21:30:57.772489  105113 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1025 21:30:57.772496  105113 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1025 21:30:57.772501  105113 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1025 21:30:57.772508  105113 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1025 21:30:57.772513  105113 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1025 21:30:57.772520  105113 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1025 21:30:57.772528  105113 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1025 21:30:57.772537  105113 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1025 21:30:57.772546  105113 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1025 21:30:57.772557  105113 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1025 21:30:57.772565  105113 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1025 21:30:57.772576  105113 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1025 21:30:57.772586  105113 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1025 21:30:57.772592  105113 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1025 21:30:57.772602  105113 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1025 21:30:57.772608  105113 command_runner.go:130] > # Example:
	I1025 21:30:57.772613  105113 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1025 21:30:57.772620  105113 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1025 21:30:57.772625  105113 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1025 21:30:57.772632  105113 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1025 21:30:57.772638  105113 command_runner.go:130] > # cpuset = "0-1"
	I1025 21:30:57.772643  105113 command_runner.go:130] > # cpushares = "0"
	I1025 21:30:57.772649  105113 command_runner.go:130] > # Where:
	I1025 21:30:57.772654  105113 command_runner.go:130] > # The workload name is workload-type.
	I1025 21:30:57.772663  105113 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1025 21:30:57.772671  105113 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1025 21:30:57.772677  105113 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1025 21:30:57.772688  105113 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1025 21:30:57.772696  105113 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1025 21:30:57.772703  105113 command_runner.go:130] > # 
	I1025 21:30:57.772709  105113 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1025 21:30:57.772715  105113 command_runner.go:130] > #
	I1025 21:30:57.772721  105113 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1025 21:30:57.772730  105113 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1025 21:30:57.772738  105113 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1025 21:30:57.772747  105113 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1025 21:30:57.772755  105113 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1025 21:30:57.772759  105113 command_runner.go:130] > [crio.image]
	I1025 21:30:57.772767  105113 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1025 21:30:57.772772  105113 command_runner.go:130] > # default_transport = "docker://"
	I1025 21:30:57.772781  105113 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1025 21:30:57.772789  105113 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1025 21:30:57.772795  105113 command_runner.go:130] > # global_auth_file = ""
	I1025 21:30:57.772801  105113 command_runner.go:130] > # The image used to instantiate infra containers.
	I1025 21:30:57.772808  105113 command_runner.go:130] > # This option supports live configuration reload.
	I1025 21:30:57.772816  105113 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1025 21:30:57.772825  105113 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1025 21:30:57.772833  105113 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1025 21:30:57.772840  105113 command_runner.go:130] > # This option supports live configuration reload.
	I1025 21:30:57.772845  105113 command_runner.go:130] > # pause_image_auth_file = ""
	I1025 21:30:57.772854  105113 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1025 21:30:57.772862  105113 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1025 21:30:57.772870  105113 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1025 21:30:57.772878  105113 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1025 21:30:57.772885  105113 command_runner.go:130] > # pause_command = "/pause"
	I1025 21:30:57.772891  105113 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1025 21:30:57.772900  105113 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1025 21:30:57.772908  105113 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1025 21:30:57.772917  105113 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1025 21:30:57.772924  105113 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1025 21:30:57.772928  105113 command_runner.go:130] > # signature_policy = ""
	I1025 21:30:57.772940  105113 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1025 21:30:57.772949  105113 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1025 21:30:57.772956  105113 command_runner.go:130] > # changing them here.
	I1025 21:30:57.772960  105113 command_runner.go:130] > # insecure_registries = [
	I1025 21:30:57.772966  105113 command_runner.go:130] > # ]
	I1025 21:30:57.772972  105113 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1025 21:30:57.772979  105113 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1025 21:30:57.772987  105113 command_runner.go:130] > # image_volumes = "mkdir"
	I1025 21:30:57.772992  105113 command_runner.go:130] > # Temporary directory to use for storing big files
	I1025 21:30:57.772998  105113 command_runner.go:130] > # big_files_temporary_dir = ""
	I1025 21:30:57.773004  105113 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1025 21:30:57.773011  105113 command_runner.go:130] > # CNI plugins.
	I1025 21:30:57.773015  105113 command_runner.go:130] > [crio.network]
	I1025 21:30:57.773022  105113 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1025 21:30:57.773029  105113 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1025 21:30:57.773036  105113 command_runner.go:130] > # cni_default_network = ""
	I1025 21:30:57.773041  105113 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1025 21:30:57.773048  105113 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1025 21:30:57.773054  105113 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1025 21:30:57.773060  105113 command_runner.go:130] > # plugin_dirs = [
	I1025 21:30:57.773065  105113 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1025 21:30:57.773072  105113 command_runner.go:130] > # ]
	I1025 21:30:57.773078  105113 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1025 21:30:57.773084  105113 command_runner.go:130] > [crio.metrics]
	I1025 21:30:57.773089  105113 command_runner.go:130] > # Globally enable or disable metrics support.
	I1025 21:30:57.773096  105113 command_runner.go:130] > # enable_metrics = false
	I1025 21:30:57.773100  105113 command_runner.go:130] > # Specify enabled metrics collectors.
	I1025 21:30:57.773107  105113 command_runner.go:130] > # Per default all metrics are enabled.
	I1025 21:30:57.773113  105113 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1025 21:30:57.773121  105113 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1025 21:30:57.773129  105113 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1025 21:30:57.773134  105113 command_runner.go:130] > # metrics_collectors = [
	I1025 21:30:57.773141  105113 command_runner.go:130] > # 	"operations",
	I1025 21:30:57.773145  105113 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1025 21:30:57.773152  105113 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1025 21:30:57.773156  105113 command_runner.go:130] > # 	"operations_errors",
	I1025 21:30:57.773163  105113 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1025 21:30:57.773167  105113 command_runner.go:130] > # 	"image_pulls_by_name",
	I1025 21:30:57.773174  105113 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1025 21:30:57.773178  105113 command_runner.go:130] > # 	"image_pulls_failures",
	I1025 21:30:57.773184  105113 command_runner.go:130] > # 	"image_pulls_successes",
	I1025 21:30:57.773188  105113 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1025 21:30:57.773194  105113 command_runner.go:130] > # 	"image_layer_reuse",
	I1025 21:30:57.773198  105113 command_runner.go:130] > # 	"containers_oom_total",
	I1025 21:30:57.773204  105113 command_runner.go:130] > # 	"containers_oom",
	I1025 21:30:57.773209  105113 command_runner.go:130] > # 	"processes_defunct",
	I1025 21:30:57.773215  105113 command_runner.go:130] > # 	"operations_total",
	I1025 21:30:57.773220  105113 command_runner.go:130] > # 	"operations_latency_seconds",
	I1025 21:30:57.773226  105113 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1025 21:30:57.773231  105113 command_runner.go:130] > # 	"operations_errors_total",
	I1025 21:30:57.773237  105113 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1025 21:30:57.773242  105113 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1025 21:30:57.773248  105113 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1025 21:30:57.773253  105113 command_runner.go:130] > # 	"image_pulls_success_total",
	I1025 21:30:57.773260  105113 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1025 21:30:57.773264  105113 command_runner.go:130] > # 	"containers_oom_count_total",
	I1025 21:30:57.773270  105113 command_runner.go:130] > # ]
	I1025 21:30:57.773276  105113 command_runner.go:130] > # The port on which the metrics server will listen.
	I1025 21:30:57.773282  105113 command_runner.go:130] > # metrics_port = 9090
	I1025 21:30:57.773287  105113 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1025 21:30:57.773294  105113 command_runner.go:130] > # metrics_socket = ""
	I1025 21:30:57.773299  105113 command_runner.go:130] > # The certificate for the secure metrics server.
	I1025 21:30:57.773307  105113 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1025 21:30:57.773315  105113 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1025 21:30:57.773322  105113 command_runner.go:130] > # certificate on any modification event.
	I1025 21:30:57.773328  105113 command_runner.go:130] > # metrics_cert = ""
	I1025 21:30:57.773333  105113 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1025 21:30:57.773341  105113 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1025 21:30:57.773347  105113 command_runner.go:130] > # metrics_key = ""
	I1025 21:30:57.773353  105113 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1025 21:30:57.773359  105113 command_runner.go:130] > [crio.tracing]
	I1025 21:30:57.773364  105113 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1025 21:30:57.773368  105113 command_runner.go:130] > # enable_tracing = false
	I1025 21:30:57.773376  105113 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1025 21:30:57.773381  105113 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1025 21:30:57.773388  105113 command_runner.go:130] > # Number of samples to collect per million spans.
	I1025 21:30:57.773393  105113 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1025 21:30:57.773400  105113 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1025 21:30:57.773404  105113 command_runner.go:130] > [crio.stats]
	I1025 21:30:57.773412  105113 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1025 21:30:57.773420  105113 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1025 21:30:57.773424  105113 command_runner.go:130] > # stats_collection_period = 0
	I1025 21:30:57.773487  105113 cni.go:84] Creating CNI manager for ""
	I1025 21:30:57.773496  105113 cni.go:136] 2 nodes found, recommending kindnet
	I1025 21:30:57.773504  105113 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 21:30:57.773523  105113 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-874778 NodeName:multinode-874778-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 21:30:57.773626  105113 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-874778-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 21:30:57.773673  105113 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-874778-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-874778 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 21:30:57.773718  105113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 21:30:57.780984  105113 command_runner.go:130] > kubeadm
	I1025 21:30:57.780998  105113 command_runner.go:130] > kubectl
	I1025 21:30:57.781003  105113 command_runner.go:130] > kubelet
	I1025 21:30:57.781577  105113 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 21:30:57.781637  105113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1025 21:30:57.789005  105113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1025 21:30:57.804395  105113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
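	The 10-kubeadm.conf drop-in written above relies on a systemd idiom that is easy to miss: the first, empty `ExecStart=` clears the command inherited from the base kubelet.service so the following line can redefine it. A minimal sketch of that layout, written to a scratch directory rather than the real /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, with the kubelet flags trimmed for brevity:

```shell
# Scratch directory stands in for /etc/systemd/system/kubelet.service.d/.
DROPIN_DIR=$(mktemp -d)
cat > "$DROPIN_DIR/10-kubeadm.conf" <<'EOF'
[Service]
# An empty ExecStart= resets the value inherited from the base kubelet.service;
# the second assignment then installs the minikube-specific command line.
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --node-ip=192.168.58.3
EOF
grep -c '^ExecStart=' "$DROPIN_DIR/10-kubeadm.conf"
```

	With the real file in place, the `systemctl daemon-reload` run later in this log makes systemd pick up the override.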
	I1025 21:30:57.819796  105113 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1025 21:30:57.822837  105113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
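	The `/bin/bash -c` one-liner above refreshes the control-plane entry in /etc/hosts: it filters out any stale line for the name, appends the current address, and copies the result back over the file. A sketch of the same trick against a throwaway file (the 192.168.58.9 stale address is invented for the demonstration):

```shell
# Work on a temporary file instead of /etc/hosts so the sketch is safe to run.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.58.9\tcontrol-plane.minikube.internal\n' > "$HOSTS"
# Same pattern as the log: drop lines ending in "<tab>control-plane.minikube.internal",
# append the desired entry, then replace the file in one step.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$HOSTS"; \
  printf '192.168.58.2\tcontrol-plane.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```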
	I1025 21:30:57.832314  105113 host.go:66] Checking if "multinode-874778" exists ...
	I1025 21:30:57.832602  105113 config.go:182] Loaded profile config "multinode-874778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:30:57.832573  105113 start.go:304] JoinCluster: &{Name:multinode-874778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-874778 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:30:57.832649  105113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1025 21:30:57.832691  105113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778
	I1025 21:30:57.848617  105113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778/id_rsa Username:docker}
	I1025 21:30:57.986018  105113 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token oqget7.g5zk94mj0ob7u8gh --discovery-token-ca-cert-hash sha256:81aa62e087573fa9098e2a57ea7cc4407ea343d82712bf34cdaff83258d6f892 
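	The `--discovery-token-ca-cert-hash` in the join command printed above is the SHA-256 digest of the cluster CA's DER-encoded public key, which the joining node uses to pin the control plane's identity. A sketch of recomputing such a hash, with a throwaway self-signed certificate standing in for the real /var/lib/minikube/certs/ca.crt:

```shell
# Generate a stand-in CA certificate (the real one lives on the control plane).
WORK=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA" \
  -keyout "$WORK/ca.key" -out "$WORK/ca.crt" 2>/dev/null
# Extract the public key, re-encode it as DER, and hash it -- the recipe
# kubeadm documents for verifying a join command's CA hash out of band.
HASH=$(openssl x509 -pubkey -noout -in "$WORK/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$HASH"
```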
	I1025 21:30:57.989736  105113 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1025 21:30:57.989783  105113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token oqget7.g5zk94mj0ob7u8gh --discovery-token-ca-cert-hash sha256:81aa62e087573fa9098e2a57ea7cc4407ea343d82712bf34cdaff83258d6f892 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-874778-m02"
	I1025 21:30:58.022490  105113 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 21:30:58.050877  105113 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1025 21:30:58.050904  105113 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-gcp
	I1025 21:30:58.050912  105113 command_runner.go:130] > OS: Linux
	I1025 21:30:58.050922  105113 command_runner.go:130] > CGROUPS_CPU: enabled
	I1025 21:30:58.050932  105113 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1025 21:30:58.050940  105113 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1025 21:30:58.050949  105113 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1025 21:30:58.050959  105113 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1025 21:30:58.050968  105113 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1025 21:30:58.050978  105113 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1025 21:30:58.050989  105113 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1025 21:30:58.051001  105113 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1025 21:30:58.127377  105113 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1025 21:30:58.127399  105113 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1025 21:30:58.150666  105113 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 21:30:58.150696  105113 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 21:30:58.150704  105113 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1025 21:30:58.220711  105113 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1025 21:31:00.232715  105113 command_runner.go:130] > This node has joined the cluster:
	I1025 21:31:00.232736  105113 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1025 21:31:00.232743  105113 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1025 21:31:00.232749  105113 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1025 21:31:00.235147  105113 command_runner.go:130] ! W1025 21:30:58.022005    1111 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1025 21:31:00.235184  105113 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1025 21:31:00.235201  105113 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 21:31:00.235235  105113 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token oqget7.g5zk94mj0ob7u8gh --discovery-token-ca-cert-hash sha256:81aa62e087573fa9098e2a57ea7cc4407ea343d82712bf34cdaff83258d6f892 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-874778-m02": (2.24542103s)
	I1025 21:31:00.235261  105113 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1025 21:31:00.399314  105113 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1025 21:31:00.399346  105113 start.go:306] JoinCluster complete in 2.566771525s
	I1025 21:31:00.399360  105113 cni.go:84] Creating CNI manager for ""
	I1025 21:31:00.399367  105113 cni.go:136] 2 nodes found, recommending kindnet
	I1025 21:31:00.399431  105113 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 21:31:00.402947  105113 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1025 21:31:00.402971  105113 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1025 21:31:00.402980  105113 command_runner.go:130] > Device: 33h/51d	Inode: 555944      Links: 1
	I1025 21:31:00.402986  105113 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 21:31:00.402992  105113 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1025 21:31:00.402997  105113 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1025 21:31:00.403001  105113 command_runner.go:130] > Change: 2023-10-25 21:11:12.434356897 +0000
	I1025 21:31:00.403006  105113 command_runner.go:130] >  Birth: 2023-10-25 21:11:12.410354451 +0000
	I1025 21:31:00.403053  105113 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1025 21:31:00.403062  105113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1025 21:31:00.418524  105113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 21:31:00.632651  105113 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1025 21:31:00.632674  105113 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1025 21:31:00.632679  105113 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1025 21:31:00.632684  105113 command_runner.go:130] > daemonset.apps/kindnet configured
	I1025 21:31:00.633001  105113 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:31:00.633197  105113 kapi.go:59] client config for multinode-874778: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/client.crt", KeyFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/client.key", CAFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 21:31:00.633513  105113 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1025 21:31:00.633531  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:00.633541  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:00.633550  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:00.635699  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:31:00.635721  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:00.635731  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:00 GMT
	I1025 21:31:00.635740  105113 round_trippers.go:580]     Audit-Id: 15d0d7b5-6e4a-4f78-83df-41f2fde093a0
	I1025 21:31:00.635752  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:00.635764  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:00.635775  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:00.635786  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:00.635797  105113 round_trippers.go:580]     Content-Length: 291
	I1025 21:31:00.635831  105113 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bae3cddd-1c77-4771-90f1-9a4c1aff3e13","resourceVersion":"447","creationTimestamp":"2023-10-25T21:29:58Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1025 21:31:00.635921  105113 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-874778" context rescaled to 1 replicas
	I1025 21:31:00.635947  105113 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1025 21:31:00.639502  105113 out.go:177] * Verifying Kubernetes components...
	I1025 21:31:00.641076  105113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:31:00.652583  105113 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:31:00.652812  105113 kapi.go:59] client config for multinode-874778: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/client.crt", KeyFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/profiles/multinode-874778/client.key", CAFile:"/home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 21:31:00.653055  105113 node_ready.go:35] waiting up to 6m0s for node "multinode-874778-m02" to be "Ready" ...
	I1025 21:31:00.653130  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778-m02
	I1025 21:31:00.653141  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:00.653152  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:00.653162  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:00.655312  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:31:00.655371  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:00.655383  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:00.655392  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:00.655398  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:00 GMT
	I1025 21:31:00.655406  105113 round_trippers.go:580]     Audit-Id: 6ab73368-f779-4489-83bb-cf187fb96b85
	I1025 21:31:00.655414  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:00.655419  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:00.655517  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778-m02","uid":"9a7dae57-4f22-43ad-9c16-5ffa18ce9805","resourceVersion":"483","creationTimestamp":"2023-10-25T21:31:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1025 21:31:00.655877  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778-m02
	I1025 21:31:00.655892  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:00.655900  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:00.655908  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:00.657412  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:31:00.657427  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:00.657433  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:00.657438  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:00.657443  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:00.657449  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:00.657456  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:00 GMT
	I1025 21:31:00.657464  105113 round_trippers.go:580]     Audit-Id: 3c842701-d76e-4f78-95f2-d216d0cd08bd
	I1025 21:31:00.657566  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778-m02","uid":"9a7dae57-4f22-43ad-9c16-5ffa18ce9805","resourceVersion":"483","creationTimestamp":"2023-10-25T21:31:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1025 21:31:01.158299  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778-m02
	I1025 21:31:01.158325  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:01.158335  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:01.158341  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:01.160902  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:31:01.160926  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:01.160937  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:01.160951  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:01.160960  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:01.160970  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:01 GMT
	I1025 21:31:01.160982  105113 round_trippers.go:580]     Audit-Id: 4710c20e-fa39-4a30-a360-7a002705e18a
	I1025 21:31:01.160990  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:01.161116  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778-m02","uid":"9a7dae57-4f22-43ad-9c16-5ffa18ce9805","resourceVersion":"483","creationTimestamp":"2023-10-25T21:31:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1025 21:31:01.658925  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778-m02
	I1025 21:31:01.658949  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:01.658956  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:01.658963  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:01.661112  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:31:01.661134  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:01.661140  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:01.661146  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:01.661150  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:01.661156  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:01.661165  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:01 GMT
	I1025 21:31:01.661174  105113 round_trippers.go:580]     Audit-Id: ca7f0923-005a-4204-b22a-a58b6b524961
	I1025 21:31:01.661373  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778-m02","uid":"9a7dae57-4f22-43ad-9c16-5ffa18ce9805","resourceVersion":"483","creationTimestamp":"2023-10-25T21:31:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1025 21:31:02.159023  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778-m02
	I1025 21:31:02.159043  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:02.159051  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:02.159057  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:02.161120  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:31:02.161139  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:02.161148  105113 round_trippers.go:580]     Audit-Id: 4d0ee57a-7393-4ca4-8035-160e30a39d93
	I1025 21:31:02.161157  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:02.161163  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:02.161171  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:02.161179  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:02.161188  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:02 GMT
	I1025 21:31:02.161287  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778-m02","uid":"9a7dae57-4f22-43ad-9c16-5ffa18ce9805","resourceVersion":"502","creationTimestamp":"2023-10-25T21:31:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I1025 21:31:02.161578  105113 node_ready.go:49] node "multinode-874778-m02" has status "Ready":"True"
	I1025 21:31:02.161596  105113 node_ready.go:38] duration metric: took 1.508526379s waiting for node "multinode-874778-m02" to be "Ready" ...
	I1025 21:31:02.161607  105113 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 21:31:02.161663  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1025 21:31:02.161674  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:02.161684  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:02.161694  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:02.164483  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:31:02.164504  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:02.164523  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:02.164533  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:02.164540  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:02.164549  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:02 GMT
	I1025 21:31:02.164559  105113 round_trippers.go:580]     Audit-Id: 36258f2b-aa0c-4f7b-83e3-b7094e7134af
	I1025 21:31:02.164571  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:02.165170  105113 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"503"},"items":[{"metadata":{"name":"coredns-5dd5756b68-knfr2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aaf2ddcd-6832-4476-b04f-12e4fdd933b8","resourceVersion":"442","creationTimestamp":"2023-10-25T21:30:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"141ac871-bb5e-4c1b-8ac4-12316f895547","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:30:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"141ac871-bb5e-4c1b-8ac4-12316f895547\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68970 chars]
	I1025 21:31:02.168131  105113 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-knfr2" in "kube-system" namespace to be "Ready" ...
	I1025 21:31:02.168210  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knfr2
	I1025 21:31:02.168222  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:02.168233  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:02.168243  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:02.169842  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:31:02.169861  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:02.169869  105113 round_trippers.go:580]     Audit-Id: a2684c15-b7a0-4673-b0be-3feaaf58c88b
	I1025 21:31:02.169877  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:02.169888  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:02.169895  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:02.169904  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:02.169913  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:02 GMT
	I1025 21:31:02.170055  105113 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knfr2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aaf2ddcd-6832-4476-b04f-12e4fdd933b8","resourceVersion":"442","creationTimestamp":"2023-10-25T21:30:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"141ac871-bb5e-4c1b-8ac4-12316f895547","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:30:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"141ac871-bb5e-4c1b-8ac4-12316f895547\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1025 21:31:02.170464  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:31:02.170477  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:02.170484  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:02.170490  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:02.172261  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:31:02.172276  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:02.172284  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:02.172294  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:02 GMT
	I1025 21:31:02.172306  105113 round_trippers.go:580]     Audit-Id: df904b7c-1524-42d9-b41e-dcb2cc22ed8d
	I1025 21:31:02.172315  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:02.172323  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:02.172332  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:02.172441  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1025 21:31:02.172745  105113 pod_ready.go:92] pod "coredns-5dd5756b68-knfr2" in "kube-system" namespace has status "Ready":"True"
	I1025 21:31:02.172760  105113 pod_ready.go:81] duration metric: took 4.604373ms waiting for pod "coredns-5dd5756b68-knfr2" in "kube-system" namespace to be "Ready" ...
	I1025 21:31:02.172767  105113 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-874778" in "kube-system" namespace to be "Ready" ...
	I1025 21:31:02.172822  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-874778
	I1025 21:31:02.172833  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:02.172843  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:02.172854  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:02.174326  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:31:02.174341  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:02.174347  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:02.174354  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:02.174360  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:02 GMT
	I1025 21:31:02.174365  105113 round_trippers.go:580]     Audit-Id: f034ec41-bcfe-476f-bd76-348aa3e2c4c2
	I1025 21:31:02.174370  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:02.174375  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:02.174505  105113 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-874778","namespace":"kube-system","uid":"732babe1-d90c-4663-bbbc-acbca47036e2","resourceVersion":"323","creationTimestamp":"2023-10-25T21:29:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"0013ce498834fae8862b745666dfa45e","kubernetes.io/config.mirror":"0013ce498834fae8862b745666dfa45e","kubernetes.io/config.seen":"2023-10-25T21:29:58.930794088Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1025 21:31:02.174872  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:31:02.174886  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:02.174893  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:02.174902  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:02.176436  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:31:02.176454  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:02.176462  105113 round_trippers.go:580]     Audit-Id: cf9b5379-1430-4661-9075-9c5454230617
	I1025 21:31:02.176469  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:02.176482  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:02.176493  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:02.176510  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:02.176518  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:02 GMT
	I1025 21:31:02.176638  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1025 21:31:02.176949  105113 pod_ready.go:92] pod "etcd-multinode-874778" in "kube-system" namespace has status "Ready":"True"
	I1025 21:31:02.176963  105113 pod_ready.go:81] duration metric: took 4.190202ms waiting for pod "etcd-multinode-874778" in "kube-system" namespace to be "Ready" ...
	I1025 21:31:02.176976  105113 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-874778" in "kube-system" namespace to be "Ready" ...
	I1025 21:31:02.177016  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-874778
	I1025 21:31:02.177041  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:02.177047  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:02.177053  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:02.178537  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:31:02.178554  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:02.178563  105113 round_trippers.go:580]     Audit-Id: 7f16a99c-888d-46e6-b457-9b55c6198125
	I1025 21:31:02.178570  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:02.178578  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:02.178586  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:02.178598  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:02.178607  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:02 GMT
	I1025 21:31:02.178718  105113 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-874778","namespace":"kube-system","uid":"ef34869e-ca49-4a2c-96c7-7f7e9bc648d2","resourceVersion":"317","creationTimestamp":"2023-10-25T21:29:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"fb48415dd7ff02ca5565298cd5179555","kubernetes.io/config.mirror":"fb48415dd7ff02ca5565298cd5179555","kubernetes.io/config.seen":"2023-10-25T21:29:58.930800852Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1025 21:31:02.179102  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:31:02.179116  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:02.179127  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:02.179135  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:02.180525  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:31:02.180538  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:02.180544  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:02.180550  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:02.180554  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:02.180559  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:02 GMT
	I1025 21:31:02.180564  105113 round_trippers.go:580]     Audit-Id: d382c711-43e0-4ef8-8877-8b84074ae293
	I1025 21:31:02.180572  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:02.180655  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1025 21:31:02.180919  105113 pod_ready.go:92] pod "kube-apiserver-multinode-874778" in "kube-system" namespace has status "Ready":"True"
	I1025 21:31:02.180933  105113 pod_ready.go:81] duration metric: took 3.950104ms waiting for pod "kube-apiserver-multinode-874778" in "kube-system" namespace to be "Ready" ...
	I1025 21:31:02.180941  105113 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-874778" in "kube-system" namespace to be "Ready" ...
	I1025 21:31:02.180984  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-874778
	I1025 21:31:02.180992  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:02.180998  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:02.181004  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:02.182580  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:31:02.182597  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:02.182603  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:02.182608  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:02.182614  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:02.182619  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:02 GMT
	I1025 21:31:02.182624  105113 round_trippers.go:580]     Audit-Id: e6fbf788-8d64-4ae5-ab0e-7bfbf00166ff
	I1025 21:31:02.182629  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:02.182784  105113 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-874778","namespace":"kube-system","uid":"29064f70-ec6c-4d84-ab29-55aa9fdf9013","resourceVersion":"315","creationTimestamp":"2023-10-25T21:29:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fbcd628f4f0d0a61cdf2115088b35d26","kubernetes.io/config.mirror":"fbcd628f4f0d0a61cdf2115088b35d26","kubernetes.io/config.seen":"2023-10-25T21:29:53.419053110Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1025 21:31:02.183155  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:31:02.183168  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:02.183174  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:02.183180  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:02.184548  105113 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 21:31:02.184571  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:02.184581  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:02.184590  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:02.184597  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:02.184615  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:02 GMT
	I1025 21:31:02.184623  105113 round_trippers.go:580]     Audit-Id: 258235a9-1553-432e-9c6f-cbd1d1502e5d
	I1025 21:31:02.184632  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:02.184713  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1025 21:31:02.184945  105113 pod_ready.go:92] pod "kube-controller-manager-multinode-874778" in "kube-system" namespace has status "Ready":"True"
	I1025 21:31:02.184955  105113 pod_ready.go:81] duration metric: took 4.009766ms waiting for pod "kube-controller-manager-multinode-874778" in "kube-system" namespace to be "Ready" ...
	I1025 21:31:02.184963  105113 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m2c87" in "kube-system" namespace to be "Ready" ...
	I1025 21:31:02.359367  105113 request.go:629] Waited for 174.345092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m2c87
	I1025 21:31:02.359443  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m2c87
	I1025 21:31:02.359452  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:02.359464  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:02.359478  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:02.361644  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:31:02.361667  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:02.361675  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:02.361680  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:02.361686  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:02.361691  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:02 GMT
	I1025 21:31:02.361696  105113 round_trippers.go:580]     Audit-Id: f7d911a7-12c4-491b-aea0-76ad64c4eb4e
	I1025 21:31:02.361701  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:02.361899  105113 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m2c87","generateName":"kube-proxy-","namespace":"kube-system","uid":"8a034be9-561b-4031-8c3a-e9f208dabc41","resourceVersion":"496","creationTimestamp":"2023-10-25T21:31:00Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"704a299d-d94f-4e3e-a6f8-08ba8cf233bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"704a299d-d94f-4e3e-a6f8-08ba8cf233bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1025 21:31:02.559714  105113 request.go:629] Waited for 197.349993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-874778-m02
	I1025 21:31:02.559773  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778-m02
	I1025 21:31:02.559779  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:02.559787  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:02.559805  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:02.561973  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:31:02.561991  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:02.561997  105113 round_trippers.go:580]     Audit-Id: 4035deaa-2408-4699-bf50-3f133fc56f6d
	I1025 21:31:02.562007  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:02.562012  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:02.562017  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:02.562022  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:02.562027  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:02 GMT
	I1025 21:31:02.562134  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778-m02","uid":"9a7dae57-4f22-43ad-9c16-5ffa18ce9805","resourceVersion":"502","creationTimestamp":"2023-10-25T21:31:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I1025 21:31:02.562458  105113 pod_ready.go:92] pod "kube-proxy-m2c87" in "kube-system" namespace has status "Ready":"True"
	I1025 21:31:02.562476  105113 pod_ready.go:81] duration metric: took 377.507212ms waiting for pod "kube-proxy-m2c87" in "kube-system" namespace to be "Ready" ...
	I1025 21:31:02.562489  105113 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-msn2q" in "kube-system" namespace to be "Ready" ...
	I1025 21:31:02.759928  105113 request.go:629] Waited for 197.375204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msn2q
	I1025 21:31:02.759997  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msn2q
	I1025 21:31:02.760004  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:02.760011  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:02.760022  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:02.762103  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:31:02.762124  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:02.762134  105113 round_trippers.go:580]     Audit-Id: f8694e5a-0a6e-4926-8d55-8dbb6e7f1b92
	I1025 21:31:02.762141  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:02.762148  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:02.762156  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:02.762165  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:02.762176  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:02 GMT
	I1025 21:31:02.762326  105113 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-msn2q","generateName":"kube-proxy-","namespace":"kube-system","uid":"75b8f03b-41ea-45cd-9128-daed81df1ecc","resourceVersion":"402","creationTimestamp":"2023-10-25T21:30:11Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"704a299d-d94f-4e3e-a6f8-08ba8cf233bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:30:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"704a299d-d94f-4e3e-a6f8-08ba8cf233bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1025 21:31:02.959020  105113 request.go:629] Waited for 196.279824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:31:02.959085  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:31:02.959090  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:02.959102  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:02.959117  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:02.961280  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:31:02.961305  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:02.961316  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:02 GMT
	I1025 21:31:02.961325  105113 round_trippers.go:580]     Audit-Id: 268860c2-bd3e-4011-b710-6176037148d1
	I1025 21:31:02.961334  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:02.961346  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:02.961356  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:02.961377  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:02.961491  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1025 21:31:02.961847  105113 pod_ready.go:92] pod "kube-proxy-msn2q" in "kube-system" namespace has status "Ready":"True"
	I1025 21:31:02.961863  105113 pod_ready.go:81] duration metric: took 399.366389ms waiting for pod "kube-proxy-msn2q" in "kube-system" namespace to be "Ready" ...
	I1025 21:31:02.961876  105113 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-874778" in "kube-system" namespace to be "Ready" ...
	I1025 21:31:03.159224  105113 request.go:629] Waited for 197.282615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-874778
	I1025 21:31:03.159293  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-874778
	I1025 21:31:03.159298  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:03.159305  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:03.159319  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:03.161801  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:31:03.161823  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:03.161832  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:03.161840  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:03.161855  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:03 GMT
	I1025 21:31:03.161863  105113 round_trippers.go:580]     Audit-Id: 88039c20-4c92-4e52-b607-7e796faa9195
	I1025 21:31:03.161871  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:03.161940  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:03.162070  105113 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-874778","namespace":"kube-system","uid":"946650c6-c5ab-4c2a-8904-f989727728c7","resourceVersion":"397","creationTimestamp":"2023-10-25T21:29:59Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9fc5bdc35c58829ebbeb6e7aac44e301","kubernetes.io/config.mirror":"9fc5bdc35c58829ebbeb6e7aac44e301","kubernetes.io/config.seen":"2023-10-25T21:29:58.930804447Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-25T21:29:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1025 21:31:03.359864  105113 request.go:629] Waited for 197.35561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:31:03.359942  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-874778
	I1025 21:31:03.359947  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:03.359958  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:03.359968  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:03.362418  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:31:03.362443  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:03.362453  105113 round_trippers.go:580]     Audit-Id: 6b9673a4-0ec8-463f-b98a-de3c42da4381
	I1025 21:31:03.362463  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:03.362472  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:03.362480  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:03.362490  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:03.362501  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:03 GMT
	I1025 21:31:03.362636  105113 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-25T21:29:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1025 21:31:03.362973  105113 pod_ready.go:92] pod "kube-scheduler-multinode-874778" in "kube-system" namespace has status "Ready":"True"
	I1025 21:31:03.362988  105113 pod_ready.go:81] duration metric: took 401.10587ms waiting for pod "kube-scheduler-multinode-874778" in "kube-system" namespace to be "Ready" ...
	I1025 21:31:03.362998  105113 pod_ready.go:38] duration metric: took 1.201380797s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 21:31:03.363015  105113 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 21:31:03.363081  105113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:31:03.374158  105113 system_svc.go:56] duration metric: took 11.137484ms WaitForService to wait for kubelet.
	I1025 21:31:03.374178  105113 kubeadm.go:581] duration metric: took 2.738211216s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 21:31:03.374195  105113 node_conditions.go:102] verifying NodePressure condition ...
	I1025 21:31:03.559445  105113 request.go:629] Waited for 185.184959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1025 21:31:03.559506  105113 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1025 21:31:03.559514  105113 round_trippers.go:469] Request Headers:
	I1025 21:31:03.559525  105113 round_trippers.go:473]     Accept: application/json, */*
	I1025 21:31:03.559538  105113 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1025 21:31:03.561656  105113 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 21:31:03.561674  105113 round_trippers.go:577] Response Headers:
	I1025 21:31:03.561680  105113 round_trippers.go:580]     Content-Type: application/json
	I1025 21:31:03.561686  105113 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e0b64c-8b53-4e91-ac17-57d54562cb81
	I1025 21:31:03.561691  105113 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ff65622-d73d-44e6-99c5-7ca9cbcda733
	I1025 21:31:03.561696  105113 round_trippers.go:580]     Date: Wed, 25 Oct 2023 21:31:03 GMT
	I1025 21:31:03.561701  105113 round_trippers.go:580]     Audit-Id: d41fbb62-6bd0-4e72-bce5-af6f3e7fd503
	I1025 21:31:03.561711  105113 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 21:31:03.561923  105113 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"503"},"items":[{"metadata":{"name":"multinode-874778","uid":"98fd4ee4-b004-4d1b-91ad-3430ca2100bc","resourceVersion":"424","creationTimestamp":"2023-10-25T21:29:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-874778","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-874778","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T21_29_59_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12288 chars]
	I1025 21:31:03.562453  105113 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 21:31:03.562470  105113 node_conditions.go:123] node cpu capacity is 8
	I1025 21:31:03.562478  105113 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 21:31:03.562482  105113 node_conditions.go:123] node cpu capacity is 8
	I1025 21:31:03.562486  105113 node_conditions.go:105] duration metric: took 188.287401ms to run NodePressure ...
	I1025 21:31:03.562496  105113 start.go:228] waiting for startup goroutines ...
	I1025 21:31:03.562520  105113 start.go:242] writing updated cluster config ...
	I1025 21:31:03.562790  105113 ssh_runner.go:195] Run: rm -f paused
	I1025 21:31:03.607646  105113 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1025 21:31:03.611081  105113 out.go:177] * Done! kubectl is now configured to use "multinode-874778" cluster and "default" namespace by default
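The `pod_ready.go` wait loops and the repeated "Waited … due to client-side throttling" messages above reflect two interacting mechanisms: a poll-until-Ready loop with a deadline, and a client-side QPS rate limiter in front of every API request. The sketch below is a rough, hypothetical Python analogue of that behavior — `TokenBucket` and `wait_for_ready` are illustrative names, not the actual minikube or client-go implementation.

```python
import time

class TokenBucket:
    """Minimal client-side throttle, similar in spirit to client-go's
    default QPS-with-burst rate limiter. Hypothetical sketch only."""
    def __init__(self, qps: float, burst: int):
        self.interval = 1.0 / qps        # seconds per token
        self.tokens = float(burst)       # start with a full burst
        self.burst = burst
        self.last = time.monotonic()

    def wait(self) -> float:
        """Block until a token is available; return the seconds waited."""
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last call.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) / self.interval)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return 0.0
        needed = (1.0 - self.tokens) * self.interval
        time.sleep(needed)
        self.tokens = 0.0
        self.last = time.monotonic()
        return needed

def wait_for_ready(get_status, limiter, timeout=6 * 60):
    """Poll get_status() until it reports "Ready" (cf. the 6m0s waits
    in the log above), throttling each request through the limiter."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        waited = limiter.wait()
        if waited:
            print(f"waited {waited * 1000:.0f}ms due to client-side throttling")
        if get_status() == "Ready":
            return True
    return False

# Simulated pod that becomes Ready on the third poll.
polls = iter(["Pending", "Pending", "Ready"])
ok = wait_for_ready(lambda: next(polls), TokenBucket(qps=50.0, burst=2))
```

With a burst of 2, the first two polls go out immediately and the third is delayed by the limiter — the same pattern visible in the log, where the initial pod checks take a few milliseconds and subsequent ones report throttle waits near 200ms.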
	
	* 
	* ==> CRI-O <==
	* Oct 25 21:30:43 multinode-874778 crio[957]: time="2023-10-25 21:30:43.894709856Z" level=info msg="Starting container: 578b6c48302dfcc5d68726bafeca06a9618138748c881c7ed65f9b16125ff6cd" id=73fa2b8d-cb57-4a74-836e-8b7e1966aceb name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 21:30:43 multinode-874778 crio[957]: time="2023-10-25 21:30:43.895309320Z" level=info msg="Created container 0b51c0e543360505e76a9634a0779de7a54d9324057af7c62605ddc48e82dd5b: kube-system/coredns-5dd5756b68-knfr2/coredns" id=48e3d686-3177-4f22-986a-a675cb9e2cb4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 21:30:43 multinode-874778 crio[957]: time="2023-10-25 21:30:43.895858361Z" level=info msg="Starting container: 0b51c0e543360505e76a9634a0779de7a54d9324057af7c62605ddc48e82dd5b" id=9101bfb5-c2c2-41d9-9426-1b5fc5e5086f name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 21:30:43 multinode-874778 crio[957]: time="2023-10-25 21:30:43.903518874Z" level=info msg="Started container" PID=2350 containerID=578b6c48302dfcc5d68726bafeca06a9618138748c881c7ed65f9b16125ff6cd description=kube-system/storage-provisioner/storage-provisioner id=73fa2b8d-cb57-4a74-836e-8b7e1966aceb name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ee82fcea7bffdf8c9837a90adaf87202355a3449059eb6b0090f8ee3a64cb1d
	Oct 25 21:30:43 multinode-874778 crio[957]: time="2023-10-25 21:30:43.927044681Z" level=info msg="Started container" PID=2356 containerID=0b51c0e543360505e76a9634a0779de7a54d9324057af7c62605ddc48e82dd5b description=kube-system/coredns-5dd5756b68-knfr2/coredns id=9101bfb5-c2c2-41d9-9426-1b5fc5e5086f name=/runtime.v1.RuntimeService/StartContainer sandboxID=0e612ab4eda198c04bead5262c3f5728918675b5b6faa53c2e9832b7e1b293a1
	Oct 25 21:31:04 multinode-874778 crio[957]: time="2023-10-25 21:31:04.595839267Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-2z62q/POD" id=b4bf5c90-fcf6-46e9-9eea-bf3ab5d2e51e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 21:31:04 multinode-874778 crio[957]: time="2023-10-25 21:31:04.595893779Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 25 21:31:04 multinode-874778 crio[957]: time="2023-10-25 21:31:04.610834444Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-2z62q Namespace:default ID:bf2355d4886fc97d9e663cfad66f26accd86262121844bc055363be90d0ff44d UID:557fd86c-a683-4c9c-8a95-d174ce4b0aaa NetNS:/var/run/netns/7318b765-4541-4abe-9aa7-3e3f69f3bbc7 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 25 21:31:04 multinode-874778 crio[957]: time="2023-10-25 21:31:04.610883650Z" level=info msg="Adding pod default_busybox-5bc68d56bd-2z62q to CNI network \"kindnet\" (type=ptp)"
	Oct 25 21:31:04 multinode-874778 crio[957]: time="2023-10-25 21:31:04.619419166Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-2z62q Namespace:default ID:bf2355d4886fc97d9e663cfad66f26accd86262121844bc055363be90d0ff44d UID:557fd86c-a683-4c9c-8a95-d174ce4b0aaa NetNS:/var/run/netns/7318b765-4541-4abe-9aa7-3e3f69f3bbc7 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 25 21:31:04 multinode-874778 crio[957]: time="2023-10-25 21:31:04.619540956Z" level=info msg="Checking pod default_busybox-5bc68d56bd-2z62q for CNI network kindnet (type=ptp)"
	Oct 25 21:31:04 multinode-874778 crio[957]: time="2023-10-25 21:31:04.641521870Z" level=info msg="Ran pod sandbox bf2355d4886fc97d9e663cfad66f26accd86262121844bc055363be90d0ff44d with infra container: default/busybox-5bc68d56bd-2z62q/POD" id=b4bf5c90-fcf6-46e9-9eea-bf3ab5d2e51e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 21:31:04 multinode-874778 crio[957]: time="2023-10-25 21:31:04.642579828Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=08742622-3b7b-4748-90cd-df2e6d8b9759 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 21:31:04 multinode-874778 crio[957]: time="2023-10-25 21:31:04.642794264Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=08742622-3b7b-4748-90cd-df2e6d8b9759 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 21:31:04 multinode-874778 crio[957]: time="2023-10-25 21:31:04.643611768Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=f4a5318f-d7a3-461e-b0ff-0f4aa7a9a1dd name=/runtime.v1.ImageService/PullImage
	Oct 25 21:31:04 multinode-874778 crio[957]: time="2023-10-25 21:31:04.648267824Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 25 21:31:04 multinode-874778 crio[957]: time="2023-10-25 21:31:04.903337419Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 25 21:31:05 multinode-874778 crio[957]: time="2023-10-25 21:31:05.445372573Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=f4a5318f-d7a3-461e-b0ff-0f4aa7a9a1dd name=/runtime.v1.ImageService/PullImage
	Oct 25 21:31:05 multinode-874778 crio[957]: time="2023-10-25 21:31:05.446336512Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=d9c32265-3433-4556-927d-568cd51686e9 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 21:31:05 multinode-874778 crio[957]: time="2023-10-25 21:31:05.447360601Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d9c32265-3433-4556-927d-568cd51686e9 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 21:31:05 multinode-874778 crio[957]: time="2023-10-25 21:31:05.448080130Z" level=info msg="Creating container: default/busybox-5bc68d56bd-2z62q/busybox" id=4eccaa18-324d-43f7-afcd-01aef766b132 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 21:31:05 multinode-874778 crio[957]: time="2023-10-25 21:31:05.448162473Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 25 21:31:05 multinode-874778 crio[957]: time="2023-10-25 21:31:05.536521967Z" level=info msg="Created container 1a38605360e6e4d64135682a108c65e5ff732bdd0fb19a0f036527337fe3cb0e: default/busybox-5bc68d56bd-2z62q/busybox" id=4eccaa18-324d-43f7-afcd-01aef766b132 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 21:31:05 multinode-874778 crio[957]: time="2023-10-25 21:31:05.537801570Z" level=info msg="Starting container: 1a38605360e6e4d64135682a108c65e5ff732bdd0fb19a0f036527337fe3cb0e" id=30023dbf-5add-48b0-a06e-fe6c42ed77e7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 21:31:05 multinode-874778 crio[957]: time="2023-10-25 21:31:05.546412274Z" level=info msg="Started container" PID=2522 containerID=1a38605360e6e4d64135682a108c65e5ff732bdd0fb19a0f036527337fe3cb0e description=default/busybox-5bc68d56bd-2z62q/busybox id=30023dbf-5add-48b0-a06e-fe6c42ed77e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bf2355d4886fc97d9e663cfad66f26accd86262121844bc055363be90d0ff44d
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1a38605360e6e       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   bf2355d4886fc       busybox-5bc68d56bd-2z62q
	0b51c0e543360       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      25 seconds ago       Running             coredns                   0                   0e612ab4eda19       coredns-5dd5756b68-knfr2
	578b6c48302df       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      25 seconds ago       Running             storage-provisioner       0                   7ee82fcea7bff       storage-provisioner
	3a585be067dee       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      57 seconds ago       Running             kindnet-cni               0                   4467500de0f3c       kindnet-2542b
	03d30d9d33b5a       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      57 seconds ago       Running             kube-proxy                0                   ec11cfc776684       kube-proxy-msn2q
	95b811d02b098       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      About a minute ago   Running             kube-scheduler            0                   82d3cb80ebcbe       kube-scheduler-multinode-874778
	091a242f2c713       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   0387f8fcb5d5c       etcd-multinode-874778
	88d339d4a0421       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      About a minute ago   Running             kube-apiserver            0                   78f8aad49409a       kube-apiserver-multinode-874778
	a23e234e2142c       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      About a minute ago   Running             kube-controller-manager   0                   3952d3e8811e8       kube-controller-manager-multinode-874778
	
	* 
	* ==> coredns [0b51c0e543360505e76a9634a0779de7a54d9324057af7c62605ddc48e82dd5b] <==
	* [INFO] 10.244.0.3:53402 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069008s
	[INFO] 10.244.1.2:35612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115424s
	[INFO] 10.244.1.2:40181 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001658368s
	[INFO] 10.244.1.2:42085 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000050413s
	[INFO] 10.244.1.2:44254 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081382s
	[INFO] 10.244.1.2:57665 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001133715s
	[INFO] 10.244.1.2:52382 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000051658s
	[INFO] 10.244.1.2:51268 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066063s
	[INFO] 10.244.1.2:59570 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054585s
	[INFO] 10.244.0.3:56889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108876s
	[INFO] 10.244.0.3:40643 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092003s
	[INFO] 10.244.0.3:37447 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064532s
	[INFO] 10.244.0.3:36505 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005321s
	[INFO] 10.244.1.2:40113 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112506s
	[INFO] 10.244.1.2:37783 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009012s
	[INFO] 10.244.1.2:59350 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000052983s
	[INFO] 10.244.1.2:47825 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052399s
	[INFO] 10.244.0.3:50179 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012312s
	[INFO] 10.244.0.3:38901 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009677s
	[INFO] 10.244.0.3:42251 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078252s
	[INFO] 10.244.0.3:57213 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056538s
	[INFO] 10.244.1.2:40660 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143204s
	[INFO] 10.244.1.2:57758 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087498s
	[INFO] 10.244.1.2:53820 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059004s
	[INFO] 10.244.1.2:46744 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000079232s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-874778
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-874778
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=multinode-874778
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T21_29_59_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Oct 2023 21:29:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-874778
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 25 Oct 2023 21:30:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Oct 2023 21:30:43 +0000   Wed, 25 Oct 2023 21:29:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Oct 2023 21:30:43 +0000   Wed, 25 Oct 2023 21:29:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Oct 2023 21:30:43 +0000   Wed, 25 Oct 2023 21:29:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Oct 2023 21:30:43 +0000   Wed, 25 Oct 2023 21:30:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-874778
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 5db76a7925d94859886b71e70144b6e9
	  System UUID:                1956936b-cb10-4f40-a1db-a9abbcd2cf48
	  Boot ID:                    34092eb3-c5c2-47c9-ae8c-38e7a764813a
	  Kernel Version:             5.15.0-1045-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-2z62q                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-5dd5756b68-knfr2                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     57s
	  kube-system                 etcd-multinode-874778                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         70s
	  kube-system                 kindnet-2542b                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-multinode-874778             250m (3%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-multinode-874778    200m (2%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-msn2q                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-multinode-874778             100m (1%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  76s (x8 over 76s)  kubelet          Node multinode-874778 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s (x8 over 76s)  kubelet          Node multinode-874778 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s (x8 over 76s)  kubelet          Node multinode-874778 status is now: NodeHasSufficientPID
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s                kubelet          Node multinode-874778 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s                kubelet          Node multinode-874778 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s                kubelet          Node multinode-874778 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           58s                node-controller  Node multinode-874778 event: Registered Node multinode-874778 in Controller
	  Normal  NodeReady                26s                kubelet          Node multinode-874778 status is now: NodeReady
	
	
	Name:               multinode-874778-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-874778-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Oct 2023 21:31:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-874778-m02" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Oct 2023 21:31:01 +0000   Wed, 25 Oct 2023 21:31:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Oct 2023 21:31:01 +0000   Wed, 25 Oct 2023 21:31:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Oct 2023 21:31:01 +0000   Wed, 25 Oct 2023 21:31:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Oct 2023 21:31:01 +0000   Wed, 25 Oct 2023 21:31:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-874778-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 26d88c689c5a412892bf678c74a8c5e2
	  System UUID:                31df7405-a31f-424a-ab14-384725258bac
	  Boot ID:                    34092eb3-c5c2-47c9-ae8c-38e7a764813a
	  Kernel Version:             5.15.0-1045-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-xh8tr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-8j6rn               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9s
	  kube-system                 kube-proxy-m2c87            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age               From             Message
	  ----    ------                   ----              ----             -------
	  Normal  Starting                 8s                kube-proxy       
	  Normal  NodeHasSufficientMemory  9s (x5 over 11s)  kubelet          Node multinode-874778-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x5 over 11s)  kubelet          Node multinode-874778-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x5 over 11s)  kubelet          Node multinode-874778-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8s                node-controller  Node multinode-874778-m02 event: Registered Node multinode-874778-m02 in Controller
	  Normal  NodeReady                8s                kubelet          Node multinode-874778-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004949] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006561] FS-Cache: N-cookie d=00000000021fa65a{9p.inode} n=00000000dab2db8b
	[  +0.008738] FS-Cache: N-key=[8] '78a00f0200000000'
	[  +0.308810] FS-Cache: Duplicate cookie detected
	[  +0.004670] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006750] FS-Cache: O-cookie d=00000000021fa65a{9p.inode} n=0000000092b40cea
	[  +0.007363] FS-Cache: O-key=[8] '81a00f0200000000'
	[  +0.004955] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007980] FS-Cache: N-cookie d=00000000021fa65a{9p.inode} n=00000000471260ab
	[  +0.008707] FS-Cache: N-key=[8] '81a00f0200000000'
	[Oct25 21:20] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct25 21:22] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 96 f0 01 76 c5 7f 02 55 af 5f 22 f8 08 00
	[  +1.016105] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: 96 f0 01 76 c5 7f 02 55 af 5f 22 f8 08 00
	[  +2.015781] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 f0 01 76 c5 7f 02 55 af 5f 22 f8 08 00
	[  +4.159580] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 f0 01 76 c5 7f 02 55 af 5f 22 f8 08 00
	[  +8.195126] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 96 f0 01 76 c5 7f 02 55 af 5f 22 f8 08 00
	[ +16.122408] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 96 f0 01 76 c5 7f 02 55 af 5f 22 f8 08 00
	[Oct25 21:23] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 96 f0 01 76 c5 7f 02 55 af 5f 22 f8 08 00
	
	* 
	* ==> etcd [091a242f2c7138ab3caf65e692a2dba0016985124a1d75b91b54b6d0330db21a] <==
	* {"level":"info","ts":"2023-10-25T21:29:54.227308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-10-25T21:29:54.227785Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-10-25T21:29:54.229496Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-25T21:29:54.229598Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-25T21:29:54.229632Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-25T21:29:54.229727Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-25T21:29:54.229774Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-25T21:29:54.655593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-25T21:29:54.655648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-25T21:29:54.655676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-10-25T21:29:54.6557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-10-25T21:29:54.655706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-25T21:29:54.655714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-10-25T21:29:54.65572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-25T21:29:54.65672Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T21:29:54.657425Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-25T21:29:54.657422Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-874778 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-25T21:29:54.657454Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-25T21:29:54.657693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-25T21:29:54.657758Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-25T21:29:54.657814Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T21:29:54.657971Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T21:29:54.658045Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T21:29:54.658784Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-10-25T21:29:54.658836Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  21:31:09 up  1:13,  0 users,  load average: 0.70, 1.06, 0.78
	Linux multinode-874778 5.15.0-1045-gcp #53~20.04.2-Ubuntu SMP Wed Oct 18 12:59:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [3a585be067deecfadaaeceac751e2a144e3f94fa360e13fecb1c9fd1065b6707] <==
	* I1025 21:30:12.930727       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1025 21:30:12.930896       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I1025 21:30:12.931031       1 main.go:116] setting mtu 1500 for CNI 
	I1025 21:30:12.931078       1 main.go:146] kindnetd IP family: "ipv4"
	I1025 21:30:12.931105       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1025 21:30:43.162451       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1025 21:30:43.171096       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1025 21:30:43.171124       1 main.go:227] handling current node
	I1025 21:30:53.176778       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1025 21:30:53.176801       1 main.go:227] handling current node
	I1025 21:31:03.189053       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1025 21:31:03.189078       1 main.go:227] handling current node
	I1025 21:31:03.189087       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1025 21:31:03.189091       1 main.go:250] Node multinode-874778-m02 has CIDR [10.244.1.0/24] 
	I1025 21:31:03.189239       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [88d339d4a0421aaaa656ef01ba3e9c8ab5ba254b199dac17a025b044f9db4d8f] <==
	* I1025 21:29:56.229722       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1025 21:29:56.230246       1 aggregator.go:166] initial CRD sync complete...
	I1025 21:29:56.230312       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 21:29:56.230361       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 21:29:56.230391       1 cache.go:39] Caches are synced for autoregister controller
	I1025 21:29:56.233598       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1025 21:29:56.233659       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1025 21:29:56.233906       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 21:29:56.234634       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 21:29:56.326849       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 21:29:57.082976       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 21:29:57.087573       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 21:29:57.087591       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 21:29:57.457427       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 21:29:57.489450       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 21:29:57.557167       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 21:29:57.564729       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1025 21:29:57.565604       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 21:29:57.568980       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 21:29:58.251158       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 21:29:58.845804       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 21:29:58.856202       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 21:29:58.865331       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1025 21:30:11.831984       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1025 21:30:11.859899       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [a23e234e2142ccadc3a5ffbef5a088182839c3edfa1bd358c71f04190cc0f802] <==
	* I1025 21:30:43.497374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.911µs"
	I1025 21:30:44.088353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="100.351µs"
	I1025 21:30:44.103492       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.669938ms"
	I1025 21:30:44.103609       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.586µs"
	I1025 21:30:46.757253       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1025 21:31:00.170508       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-874778-m02\" does not exist"
	I1025 21:31:00.174666       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-874778-m02" podCIDRs=["10.244.1.0/24"]
	I1025 21:31:00.183028       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-m2c87"
	I1025 21:31:00.183052       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8j6rn"
	I1025 21:31:01.759323       1 event.go:307] "Event occurred" object="multinode-874778-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-874778-m02 event: Registered Node multinode-874778-m02 in Controller"
	I1025 21:31:01.759387       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-874778-m02"
	I1025 21:31:01.919343       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-874778-m02"
	I1025 21:31:04.277014       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1025 21:31:04.283753       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-xh8tr"
	I1025 21:31:04.287803       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-2z62q"
	I1025 21:31:04.293408       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.576169ms"
	I1025 21:31:04.301523       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="7.988882ms"
	I1025 21:31:04.301678       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.795µs"
	I1025 21:31:04.301736       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="29.065µs"
	I1025 21:31:04.307079       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.967µs"
	I1025 21:31:05.700850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="3.772285ms"
	I1025 21:31:05.700928       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="40µs"
	I1025 21:31:06.137908       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.708935ms"
	I1025 21:31:06.137981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.24µs"
	I1025 21:31:06.768361       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-xh8tr" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-xh8tr"
	
	* 
	* ==> kube-proxy [03d30d9d33b5a7aa70644bfa629bccfdc43308a52e47da740e0302c76a47bfa0] <==
	* I1025 21:30:12.937785       1 server_others.go:69] "Using iptables proxy"
	I1025 21:30:12.950998       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1025 21:30:13.051113       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 21:30:13.053204       1 server_others.go:152] "Using iptables Proxier"
	I1025 21:30:13.053239       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 21:30:13.053248       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 21:30:13.053278       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 21:30:13.053499       1 server.go:846] "Version info" version="v1.28.3"
	I1025 21:30:13.053509       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 21:30:13.054129       1 config.go:188] "Starting service config controller"
	I1025 21:30:13.054156       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 21:30:13.054204       1 config.go:315] "Starting node config controller"
	I1025 21:30:13.054205       1 config.go:97] "Starting endpoint slice config controller"
	I1025 21:30:13.054229       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 21:30:13.054217       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 21:30:13.154756       1 shared_informer.go:318] Caches are synced for node config
	I1025 21:30:13.154766       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 21:30:13.154790       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [95b811d02b098e5ffab4b5bc671f2bfc82cbad9081af3c419548fe9c70e4882b] <==
	* W1025 21:29:56.337867       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 21:29:56.338543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1025 21:29:56.337949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 21:29:56.338615       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1025 21:29:56.338107       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 21:29:56.338177       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 21:29:56.338747       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 21:29:56.338780       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 21:29:56.339184       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 21:29:56.339255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1025 21:29:57.184116       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 21:29:57.184158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1025 21:29:57.200193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 21:29:57.200226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1025 21:29:57.221737       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 21:29:57.221779       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 21:29:57.253286       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 21:29:57.253346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1025 21:29:57.270534       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 21:29:57.270570       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1025 21:29:57.294818       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 21:29:57.294852       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 21:29:57.327326       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 21:29:57.327365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1025 21:30:00.029145       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 25 21:30:12 multinode-874778 kubelet[1591]: I1025 21:30:12.028731    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75b8f03b-41ea-45cd-9128-daed81df1ecc-xtables-lock\") pod \"kube-proxy-msn2q\" (UID: \"75b8f03b-41ea-45cd-9128-daed81df1ecc\") " pod="kube-system/kube-proxy-msn2q"
	Oct 25 21:30:12 multinode-874778 kubelet[1591]: I1025 21:30:12.028778    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75b8f03b-41ea-45cd-9128-daed81df1ecc-lib-modules\") pod \"kube-proxy-msn2q\" (UID: \"75b8f03b-41ea-45cd-9128-daed81df1ecc\") " pod="kube-system/kube-proxy-msn2q"
	Oct 25 21:30:12 multinode-874778 kubelet[1591]: I1025 21:30:12.028870    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0664fe89-7c36-4f5c-ad60-0dbb8f47c413-lib-modules\") pod \"kindnet-2542b\" (UID: \"0664fe89-7c36-4f5c-ad60-0dbb8f47c413\") " pod="kube-system/kindnet-2542b"
	Oct 25 21:30:12 multinode-874778 kubelet[1591]: I1025 21:30:12.028902    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0664fe89-7c36-4f5c-ad60-0dbb8f47c413-cni-cfg\") pod \"kindnet-2542b\" (UID: \"0664fe89-7c36-4f5c-ad60-0dbb8f47c413\") " pod="kube-system/kindnet-2542b"
	Oct 25 21:30:12 multinode-874778 kubelet[1591]: I1025 21:30:12.028940    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gp8k\" (UniqueName: \"kubernetes.io/projected/0664fe89-7c36-4f5c-ad60-0dbb8f47c413-kube-api-access-7gp8k\") pod \"kindnet-2542b\" (UID: \"0664fe89-7c36-4f5c-ad60-0dbb8f47c413\") " pod="kube-system/kindnet-2542b"
	Oct 25 21:30:12 multinode-874778 kubelet[1591]: I1025 21:30:12.029025    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khbdl\" (UniqueName: \"kubernetes.io/projected/75b8f03b-41ea-45cd-9128-daed81df1ecc-kube-api-access-khbdl\") pod \"kube-proxy-msn2q\" (UID: \"75b8f03b-41ea-45cd-9128-daed81df1ecc\") " pod="kube-system/kube-proxy-msn2q"
	Oct 25 21:30:12 multinode-874778 kubelet[1591]: W1025 21:30:12.328474    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/0862499eed10d3d0fe339b85aed58bcf1373fd182861731b8aa1cf4b7ed35d6b/crio-ec11cfc77668415c9605331662777d5e90021dbb38a558afa5d206bbbfebd0f7 WatchSource:0}: Error finding container ec11cfc77668415c9605331662777d5e90021dbb38a558afa5d206bbbfebd0f7: Status 404 returned error can't find the container with id ec11cfc77668415c9605331662777d5e90021dbb38a558afa5d206bbbfebd0f7
	Oct 25 21:30:12 multinode-874778 kubelet[1591]: W1025 21:30:12.329098    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/0862499eed10d3d0fe339b85aed58bcf1373fd182861731b8aa1cf4b7ed35d6b/crio-4467500de0f3c5d0e0fb1484981d618ff22e3700f37f5141bb657c2e63f308b5 WatchSource:0}: Error finding container 4467500de0f3c5d0e0fb1484981d618ff22e3700f37f5141bb657c2e63f308b5: Status 404 returned error can't find the container with id 4467500de0f3c5d0e0fb1484981d618ff22e3700f37f5141bb657c2e63f308b5
	Oct 25 21:30:13 multinode-874778 kubelet[1591]: I1025 21:30:13.043113    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-2542b" podStartSLOduration=2.043045435 podCreationTimestamp="2023-10-25 21:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 21:30:13.042760964 +0000 UTC m=+14.222213305" watchObservedRunningTime="2023-10-25 21:30:13.043045435 +0000 UTC m=+14.222497795"
	Oct 25 21:30:13 multinode-874778 kubelet[1591]: I1025 21:30:13.056403    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-msn2q" podStartSLOduration=2.056349605 podCreationTimestamp="2023-10-25 21:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 21:30:13.056143212 +0000 UTC m=+14.235595572" watchObservedRunningTime="2023-10-25 21:30:13.056349605 +0000 UTC m=+14.235801964"
	Oct 25 21:30:43 multinode-874778 kubelet[1591]: I1025 21:30:43.461436    1591 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 25 21:30:43 multinode-874778 kubelet[1591]: I1025 21:30:43.481854    1591 topology_manager.go:215] "Topology Admit Handler" podUID="5e3d74f9-b847-40f1-b4bd-9f5e09f1249e" podNamespace="kube-system" podName="storage-provisioner"
	Oct 25 21:30:43 multinode-874778 kubelet[1591]: I1025 21:30:43.483382    1591 topology_manager.go:215] "Topology Admit Handler" podUID="aaf2ddcd-6832-4476-b04f-12e4fdd933b8" podNamespace="kube-system" podName="coredns-5dd5756b68-knfr2"
	Oct 25 21:30:43 multinode-874778 kubelet[1591]: I1025 21:30:43.643622    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb2l7\" (UniqueName: \"kubernetes.io/projected/aaf2ddcd-6832-4476-b04f-12e4fdd933b8-kube-api-access-pb2l7\") pod \"coredns-5dd5756b68-knfr2\" (UID: \"aaf2ddcd-6832-4476-b04f-12e4fdd933b8\") " pod="kube-system/coredns-5dd5756b68-knfr2"
	Oct 25 21:30:43 multinode-874778 kubelet[1591]: I1025 21:30:43.643679    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5e3d74f9-b847-40f1-b4bd-9f5e09f1249e-tmp\") pod \"storage-provisioner\" (UID: \"5e3d74f9-b847-40f1-b4bd-9f5e09f1249e\") " pod="kube-system/storage-provisioner"
	Oct 25 21:30:43 multinode-874778 kubelet[1591]: I1025 21:30:43.643714    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h55vs\" (UniqueName: \"kubernetes.io/projected/5e3d74f9-b847-40f1-b4bd-9f5e09f1249e-kube-api-access-h55vs\") pod \"storage-provisioner\" (UID: \"5e3d74f9-b847-40f1-b4bd-9f5e09f1249e\") " pod="kube-system/storage-provisioner"
	Oct 25 21:30:43 multinode-874778 kubelet[1591]: I1025 21:30:43.643794    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aaf2ddcd-6832-4476-b04f-12e4fdd933b8-config-volume\") pod \"coredns-5dd5756b68-knfr2\" (UID: \"aaf2ddcd-6832-4476-b04f-12e4fdd933b8\") " pod="kube-system/coredns-5dd5756b68-knfr2"
	Oct 25 21:30:43 multinode-874778 kubelet[1591]: W1025 21:30:43.831169    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/0862499eed10d3d0fe339b85aed58bcf1373fd182861731b8aa1cf4b7ed35d6b/crio-7ee82fcea7bffdf8c9837a90adaf87202355a3449059eb6b0090f8ee3a64cb1d WatchSource:0}: Error finding container 7ee82fcea7bffdf8c9837a90adaf87202355a3449059eb6b0090f8ee3a64cb1d: Status 404 returned error can't find the container with id 7ee82fcea7bffdf8c9837a90adaf87202355a3449059eb6b0090f8ee3a64cb1d
	Oct 25 21:30:43 multinode-874778 kubelet[1591]: W1025 21:30:43.831448    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/0862499eed10d3d0fe339b85aed58bcf1373fd182861731b8aa1cf4b7ed35d6b/crio-0e612ab4eda198c04bead5262c3f5728918675b5b6faa53c2e9832b7e1b293a1 WatchSource:0}: Error finding container 0e612ab4eda198c04bead5262c3f5728918675b5b6faa53c2e9832b7e1b293a1: Status 404 returned error can't find the container with id 0e612ab4eda198c04bead5262c3f5728918675b5b6faa53c2e9832b7e1b293a1
	Oct 25 21:30:44 multinode-874778 kubelet[1591]: I1025 21:30:44.088424    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-knfr2" podStartSLOduration=32.088371851 podCreationTimestamp="2023-10-25 21:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 21:30:44.087953918 +0000 UTC m=+45.267406278" watchObservedRunningTime="2023-10-25 21:30:44.088371851 +0000 UTC m=+45.267824211"
	Oct 25 21:30:44 multinode-874778 kubelet[1591]: I1025 21:30:44.105535    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.105475589 podCreationTimestamp="2023-10-25 21:30:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 21:30:44.105454482 +0000 UTC m=+45.284906840" watchObservedRunningTime="2023-10-25 21:30:44.105475589 +0000 UTC m=+45.284927947"
	Oct 25 21:31:04 multinode-874778 kubelet[1591]: I1025 21:31:04.292896    1591 topology_manager.go:215] "Topology Admit Handler" podUID="557fd86c-a683-4c9c-8a95-d174ce4b0aaa" podNamespace="default" podName="busybox-5bc68d56bd-2z62q"
	Oct 25 21:31:04 multinode-874778 kubelet[1591]: I1025 21:31:04.459515    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmfzr\" (UniqueName: \"kubernetes.io/projected/557fd86c-a683-4c9c-8a95-d174ce4b0aaa-kube-api-access-dmfzr\") pod \"busybox-5bc68d56bd-2z62q\" (UID: \"557fd86c-a683-4c9c-8a95-d174ce4b0aaa\") " pod="default/busybox-5bc68d56bd-2z62q"
	Oct 25 21:31:04 multinode-874778 kubelet[1591]: W1025 21:31:04.639030    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/0862499eed10d3d0fe339b85aed58bcf1373fd182861731b8aa1cf4b7ed35d6b/crio-bf2355d4886fc97d9e663cfad66f26accd86262121844bc055363be90d0ff44d WatchSource:0}: Error finding container bf2355d4886fc97d9e663cfad66f26accd86262121844bc055363be90d0ff44d: Status 404 returned error can't find the container with id bf2355d4886fc97d9e663cfad66f26accd86262121844bc055363be90d0ff44d
	Oct 25 21:31:06 multinode-874778 kubelet[1591]: I1025 21:31:06.133363    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-2z62q" podStartSLOduration=1.330503592 podCreationTimestamp="2023-10-25 21:31:04 +0000 UTC" firstStartedPulling="2023-10-25 21:31:04.642988133 +0000 UTC m=+65.822440484" lastFinishedPulling="2023-10-25 21:31:05.445810364 +0000 UTC m=+66.625262705" observedRunningTime="2023-10-25 21:31:06.133083617 +0000 UTC m=+67.312535976" watchObservedRunningTime="2023-10-25 21:31:06.133325813 +0000 UTC m=+67.312778171"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-874778 -n multinode-874778
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-874778 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.02s)

                                                
                                    
TestRunningBinaryUpgrade (70.73s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.4071989561.exe start -p running-upgrade-088634 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.4071989561.exe start -p running-upgrade-088634 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m5.146233628s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-088634 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-088634 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.405344754s)

                                                
                                                
-- stdout --
	* [running-upgrade-088634] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-088634 in cluster running-upgrade-088634
	* Pulling base image ...
	* Updating the running docker "running-upgrade-088634" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 21:43:21.739368  195025 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:43:21.739631  195025 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:43:21.739640  195025 out.go:309] Setting ErrFile to fd 2...
	I1025 21:43:21.739645  195025 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:43:21.739888  195025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
	I1025 21:43:21.740545  195025 out.go:303] Setting JSON to false
	I1025 21:43:21.742375  195025 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5151,"bootTime":1698265051,"procs":524,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:43:21.742462  195025 start.go:138] virtualization: kvm guest
	I1025 21:43:21.745590  195025 out.go:177] * [running-upgrade-088634] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:43:21.747165  195025 notify.go:220] Checking for updates...
	I1025 21:43:21.747167  195025 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 21:43:21.748650  195025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:43:21.750244  195025 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:43:21.751689  195025 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	I1025 21:43:21.753152  195025 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 21:43:21.754581  195025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:43:21.756662  195025 config.go:182] Loaded profile config "running-upgrade-088634": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1025 21:43:21.756686  195025 start_flags.go:701] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 21:43:21.759254  195025 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1025 21:43:21.760779  195025 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:43:21.788279  195025 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:43:21.788359  195025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:43:21.852888  195025 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:91 SystemTime:2023-10-25 21:43:21.84217788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:43:21.853035  195025 docker.go:295] overlay module found
	I1025 21:43:21.855451  195025 out.go:177] * Using the docker driver based on existing profile
	I1025 21:43:21.856984  195025 start.go:298] selected driver: docker
	I1025 21:43:21.856997  195025 start.go:902] validating driver "docker" against &{Name:running-upgrade-088634 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-088634 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1025 21:43:21.857074  195025 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:43:21.857896  195025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:43:21.924066  195025 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:91 SystemTime:2023-10-25 21:43:21.912345815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:43:21.924473  195025 cni.go:84] Creating CNI manager for ""
	I1025 21:43:21.924499  195025 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1025 21:43:21.924513  195025 start_flags.go:323] config:
	{Name:running-upgrade-088634 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-088634 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cr
io CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 Auto
PauseInterval:0s GPUs:}
	I1025 21:43:21.926888  195025 out.go:177] * Starting control plane node running-upgrade-088634 in cluster running-upgrade-088634
	I1025 21:43:21.928363  195025 cache.go:121] Beginning downloading kic base image for docker with crio
	I1025 21:43:21.929675  195025 out.go:177] * Pulling base image ...
	I1025 21:43:21.930994  195025 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1025 21:43:21.931104  195025 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 21:43:21.950620  195025 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 21:43:21.950655  195025 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	W1025 21:43:21.963058  195025 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1025 21:43:21.963243  195025 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/running-upgrade-088634/config.json ...
	I1025 21:43:21.963356  195025 cache.go:107] acquiring lock: {Name:mk514d9d0d40ab639c75f12b0a0fc9351220f63e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:21.963397  195025 cache.go:107] acquiring lock: {Name:mka6d5a7ff688dca8be8d1762e7286442873e8d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:21.963404  195025 cache.go:107] acquiring lock: {Name:mkcf1d8595ab9a8466488222ddf6759d30cc7ae0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:21.963460  195025 cache.go:107] acquiring lock: {Name:mka4777f5224132bc4ca8178cbb01ecfc922d149 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:21.963471  195025 cache.go:107] acquiring lock: {Name:mk77f8ddf4f01d722258159908d08454f02958d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:21.963443  195025 cache.go:115] /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1025 21:43:21.963514  195025 cache.go:115] /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1025 21:43:21.963515  195025 cache.go:194] Successfully downloaded all kic artifacts
	I1025 21:43:21.963515  195025 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 176.703µs
	I1025 21:43:21.963527  195025 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1025 21:43:21.963513  195025 cache.go:107] acquiring lock: {Name:mkd5596fc9f68382cbb7f6e29b71be0885e73c23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:21.963523  195025 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 66.766µs
	I1025 21:43:21.963541  195025 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1025 21:43:21.963538  195025 start.go:365] acquiring machines lock for running-upgrade-088634: {Name:mk81bba44a53e98e5948ae1247cf2f1e5bd513f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:21.963407  195025 cache.go:107] acquiring lock: {Name:mk70dee3324a7f3d1164c5df362d09f981a721c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:21.963558  195025 cache.go:115] /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1025 21:43:21.963558  195025 cache.go:115] /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1025 21:43:21.963497  195025 cache.go:115] /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1025 21:43:21.963583  195025 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 72.191µs
	I1025 21:43:21.963573  195025 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 100.248µs
	I1025 21:43:21.963497  195025 cache.go:115] /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1025 21:43:21.963612  195025 cache.go:115] /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1025 21:43:21.963606  195025 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1025 21:43:21.963358  195025 cache.go:107] acquiring lock: {Name:mk97c995b2a9ce652c4d189bfc5e5d7c9020ad9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:21.963621  195025 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 243.106µs
	I1025 21:43:21.963632  195025 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1025 21:43:21.963614  195025 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1025 21:43:21.963587  195025 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 192.096µs
	I1025 21:43:21.963642  195025 start.go:369] acquired machines lock for "running-upgrade-088634" in 91.3µs
	I1025 21:43:21.963652  195025 cache.go:115] /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 21:43:21.963656  195025 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:43:21.963660  195025 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 314.487µs
	I1025 21:43:21.963670  195025 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 21:43:21.963663  195025 fix.go:54] fixHost starting: m01
	I1025 21:43:21.963644  195025 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1025 21:43:21.963618  195025 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 239.164µs
	I1025 21:43:21.963730  195025 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1025 21:43:21.963739  195025 cache.go:87] Successfully saved all images to host disk.
	I1025 21:43:21.963966  195025 cli_runner.go:164] Run: docker container inspect running-upgrade-088634 --format={{.State.Status}}
	I1025 21:43:21.982582  195025 fix.go:102] recreateIfNeeded on running-upgrade-088634: state=Running err=<nil>
	W1025 21:43:21.982619  195025 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 21:43:21.986223  195025 out.go:177] * Updating the running docker "running-upgrade-088634" container ...
	I1025 21:43:21.987607  195025 machine.go:88] provisioning docker machine ...
	I1025 21:43:21.987644  195025 ubuntu.go:169] provisioning hostname "running-upgrade-088634"
	I1025 21:43:21.987698  195025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-088634
	I1025 21:43:22.011559  195025 main.go:141] libmachine: Using SSH client type: native
	I1025 21:43:22.011978  195025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1025 21:43:22.012002  195025 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-088634 && echo "running-upgrade-088634" | sudo tee /etc/hostname
	I1025 21:43:22.132104  195025 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-088634
	
	I1025 21:43:22.132193  195025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-088634
	I1025 21:43:22.153751  195025 main.go:141] libmachine: Using SSH client type: native
	I1025 21:43:22.154138  195025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1025 21:43:22.154161  195025 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-088634' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-088634/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-088634' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 21:43:22.266256  195025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 21:43:22.266323  195025 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17488-11542/.minikube CaCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17488-11542/.minikube}
	I1025 21:43:22.266360  195025 ubuntu.go:177] setting up certificates
	I1025 21:43:22.266368  195025 provision.go:83] configureAuth start
	I1025 21:43:22.266424  195025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-088634
	I1025 21:43:22.284088  195025 provision.go:138] copyHostCerts
	I1025 21:43:22.284138  195025 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem, removing ...
	I1025 21:43:22.284151  195025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem
	I1025 21:43:22.284212  195025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem (1078 bytes)
	I1025 21:43:22.284310  195025 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem, removing ...
	I1025 21:43:22.284320  195025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem
	I1025 21:43:22.284343  195025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem (1123 bytes)
	I1025 21:43:22.284394  195025 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem, removing ...
	I1025 21:43:22.284403  195025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem
	I1025 21:43:22.284424  195025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem (1675 bytes)
	I1025 21:43:22.284465  195025 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-088634 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-088634]
	I1025 21:43:22.350465  195025 provision.go:172] copyRemoteCerts
	I1025 21:43:22.350526  195025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 21:43:22.350560  195025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-088634
	I1025 21:43:22.367477  195025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/running-upgrade-088634/id_rsa Username:docker}
	I1025 21:43:22.451609  195025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 21:43:22.469868  195025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1025 21:43:22.487944  195025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 21:43:22.505782  195025 provision.go:86] duration metric: configureAuth took 239.399868ms
	I1025 21:43:22.505819  195025 ubuntu.go:193] setting minikube options for container-runtime
	I1025 21:43:22.505997  195025 config.go:182] Loaded profile config "running-upgrade-088634": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1025 21:43:22.506101  195025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-088634
	I1025 21:43:22.523014  195025 main.go:141] libmachine: Using SSH client type: native
	I1025 21:43:22.523554  195025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1025 21:43:22.523590  195025 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 21:43:22.951875  195025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 21:43:22.951901  195025 machine.go:91] provisioned docker machine in 964.270123ms
	I1025 21:43:22.951913  195025 start.go:300] post-start starting for "running-upgrade-088634" (driver="docker")
	I1025 21:43:22.951926  195025 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 21:43:22.952007  195025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 21:43:22.952051  195025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-088634
	I1025 21:43:22.969634  195025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/running-upgrade-088634/id_rsa Username:docker}
	I1025 21:43:23.049552  195025 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 21:43:23.052306  195025 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 21:43:23.052327  195025 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 21:43:23.052337  195025 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 21:43:23.052346  195025 info.go:137] Remote host: Ubuntu 19.10
	I1025 21:43:23.052358  195025 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-11542/.minikube/addons for local assets ...
	I1025 21:43:23.052433  195025 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-11542/.minikube/files for local assets ...
	I1025 21:43:23.052545  195025 filesync.go:149] local asset: /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem -> 183232.pem in /etc/ssl/certs
	I1025 21:43:23.052693  195025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 21:43:23.059451  195025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem --> /etc/ssl/certs/183232.pem (1708 bytes)
	I1025 21:43:23.077170  195025 start.go:303] post-start completed in 125.242547ms
	I1025 21:43:23.077259  195025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:43:23.077305  195025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-088634
	I1025 21:43:23.093902  195025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/running-upgrade-088634/id_rsa Username:docker}
	I1025 21:43:23.174799  195025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:43:23.178590  195025 fix.go:56] fixHost completed within 1.214918794s
	I1025 21:43:23.178611  195025 start.go:83] releasing machines lock for "running-upgrade-088634", held for 1.214960358s
	I1025 21:43:23.178679  195025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-088634
	I1025 21:43:23.197815  195025 ssh_runner.go:195] Run: cat /version.json
	I1025 21:43:23.197882  195025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-088634
	I1025 21:43:23.197967  195025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 21:43:23.198097  195025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-088634
	I1025 21:43:23.216341  195025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/running-upgrade-088634/id_rsa Username:docker}
	I1025 21:43:23.218174  195025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/running-upgrade-088634/id_rsa Username:docker}
	W1025 21:43:23.297440  195025 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1025 21:43:23.297524  195025 ssh_runner.go:195] Run: systemctl --version
	I1025 21:43:23.335961  195025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 21:43:23.400429  195025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 21:43:23.404595  195025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:43:23.425523  195025 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1025 21:43:23.425662  195025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:43:23.517669  195025 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 21:43:23.517693  195025 start.go:472] detecting cgroup driver to use...
	I1025 21:43:23.517722  195025 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 21:43:23.517767  195025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 21:43:23.538047  195025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 21:43:23.549427  195025 docker.go:198] disabling cri-docker service (if available) ...
	I1025 21:43:23.549479  195025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 21:43:23.559711  195025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 21:43:23.569229  195025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1025 21:43:23.577853  195025 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1025 21:43:23.577911  195025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 21:43:23.651315  195025 docker.go:214] disabling docker service ...
	I1025 21:43:23.651382  195025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 21:43:23.661262  195025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 21:43:23.670679  195025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 21:43:23.735506  195025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 21:43:23.808381  195025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 21:43:23.817715  195025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 21:43:23.871549  195025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1025 21:43:23.871625  195025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:43:23.973790  195025 out.go:177] 
	W1025 21:43:24.035503  195025 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1025 21:43:24.035544  195025 out.go:239] * 
	* 
	W1025 21:43:24.036701  195025 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:43:24.057537  195025 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-088634 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
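The root cause visible in the log is that minikube's `sed` assumes `/etc/crio/crio.conf.d/02-crio.conf` already exists, but the v1.9.0-era base image (Ubuntu 19.10, per the `Remote host` line above) predates that drop-in layout, so `sed` exits with status 2. A guarded variant of the same `pause_image` update is sketched below against a temporary directory rather than the real CRI-O config tree; the path and key names are taken from the log, while the existence guard and the seeded `[crio.image]` stanza are illustrative, not minikube's actual fix:

```shell
#!/bin/sh
# Sketch: update pause_image, creating the drop-in file first if it is missing.
# Uses a temp dir in place of /etc/crio so this runs unprivileged.
conf_dir="$(mktemp -d)/crio.conf.d"
conf="$conf_dir/02-crio.conf"

mkdir -p "$conf_dir"
# Seed the drop-in with a default key when absent, so sed has a line to rewrite
# instead of failing with "No such file or directory".
[ -f "$conf" ] || printf '[crio.image]\npause_image = ""\n' > "$conf"

sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"
cat "$conf"
```

With the guard in place the substitution succeeds on both layouts: if the drop-in already exists its `pause_image` line is rewritten, and if it does not, the seeded file is rewritten instead of `sed` aborting the runtime-enable step.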
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-25 21:43:24.090542304 +0000 UTC m=+1949.648761612
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-088634
helpers_test.go:235: (dbg) docker inspect running-upgrade-088634:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7e97b80804bbcf2c2e4f321c4c450281a8a1b1a2a887cfb32cc6a67ef83fd666",
	        "Created": "2023-10-25T21:42:16.887350773Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 179966,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-25T21:42:17.334972487Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/7e97b80804bbcf2c2e4f321c4c450281a8a1b1a2a887cfb32cc6a67ef83fd666/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7e97b80804bbcf2c2e4f321c4c450281a8a1b1a2a887cfb32cc6a67ef83fd666/hostname",
	        "HostsPath": "/var/lib/docker/containers/7e97b80804bbcf2c2e4f321c4c450281a8a1b1a2a887cfb32cc6a67ef83fd666/hosts",
	        "LogPath": "/var/lib/docker/containers/7e97b80804bbcf2c2e4f321c4c450281a8a1b1a2a887cfb32cc6a67ef83fd666/7e97b80804bbcf2c2e4f321c4c450281a8a1b1a2a887cfb32cc6a67ef83fd666-json.log",
	        "Name": "/running-upgrade-088634",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-088634:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b866173c323c0f1c4e37943c476a47e9c7aa867d6bade31d4e4054b881c8a775-init/diff:/var/lib/docker/overlay2/d3e38e7d767ca067c5316546d02f26c4661e6214c68709fa5fbc9295bd46f07b/diff:/var/lib/docker/overlay2/82062e64ca953c4605e07c6d8acdb6bf603851d401b6821d7e8e28426bc54eda/diff:/var/lib/docker/overlay2/e29ce509980a92e3c3d1e3e37970bc8e8ed733cf8c94212b4e6b52032ea4d767/diff:/var/lib/docker/overlay2/fbbef4e2b50913346872881b77beb3ddcd3af494a0136ae19bd40fca5253308f/diff:/var/lib/docker/overlay2/3e831488b1373f1c4cba0d50696538b8eabbc2bdeef815eb4df7b238ad8216c8/diff:/var/lib/docker/overlay2/fb10ebce8642f49df4194fe64d69df9b3bd2e067ee4d44bf419c40d489111f3f/diff:/var/lib/docker/overlay2/408817e8ebabc418112e2bcaf7701457ca0b290360327d511739b9e810fa80b0/diff:/var/lib/docker/overlay2/28a49a97f3e7959a9d08067825b13b88496a2dcb27d348659f8bc69d7ff04fef/diff:/var/lib/docker/overlay2/8a05ea5e9d27ad5e3297ce8b56a25562b9d43a69fd1c94ed6213d9aafc5477e7/diff:/var/lib/docker/overlay2/95a0f72e096198ea568ed4f6fa500b465f338f56dfba02081f32b3bbe7b09a3b/diff:/var/lib/docker/overlay2/234e7ec748c2adfb14f704f2b92da208815f8573bf8366685587454e51c256fb/diff:/var/lib/docker/overlay2/82db7d62d89ef0d8b9f49c9f643e5b66bc2523452af5ff914260d0a6ff9a4032/diff:/var/lib/docker/overlay2/b1e81a8a8779466077efeab9b40a0cb63909ddf74b7a56a7b1f3997500b3f89e/diff:/var/lib/docker/overlay2/7f5178937c39bb15320c393b17aa4a2db21934bc71243f91e982c9ba6b3bf0ac/diff:/var/lib/docker/overlay2/eadacba0891fd27b845d0ff04297cf1d1b7a693db71d62c3da17dfd17e0fc742/diff:/var/lib/docker/overlay2/072934e1ad1998d8f1e6c1452cc55c8a069a44aea856eb2a246531ffb8ae7fcc/diff:/var/lib/docker/overlay2/16973198e94ad7ba856e6a43df55d38b3b3608100a80a91adb3e86cf992ca9ed/diff:/var/lib/docker/overlay2/7f2f00df6e507f8ff58fe11c30fde4eec945f739733db67562ccc9a84cf3d823/diff:/var/lib/docker/overlay2/e21ac97501f6e5bb1d825e70953868fa98d684a3a6b61f639c5b161bf9c567e6/diff:/var/lib/docker/overlay2/ffaf214d2a81e0f34a03ad6b45322e00e29473f416679cec34a6409c7eb7e262/diff:/var/lib/docker/overlay2/cf83c5fae7354370e5d39baa5474d246add76880a61034eabeb9693773b7dfbb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b866173c323c0f1c4e37943c476a47e9c7aa867d6bade31d4e4054b881c8a775/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b866173c323c0f1c4e37943c476a47e9c7aa867d6bade31d4e4054b881c8a775/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b866173c323c0f1c4e37943c476a47e9c7aa867d6bade31d4e4054b881c8a775/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-088634",
	                "Source": "/var/lib/docker/volumes/running-upgrade-088634/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-088634",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-088634",
	                "name.minikube.sigs.k8s.io": "running-upgrade-088634",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a2d4c65f0e7d6b9df8f366b566cfa969fb5798f14bac5856dfc851775f02206c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32958"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32957"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32956"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a2d4c65f0e7d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "88da580ca4160a56707c5d04a67ad5859c5b3e4da5a3e9647108bb5932a503a1",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "a635f54a9e67747f2a6fcc9a65e559a654e69ba8b89bd5f6c44d9bf197c40673",
	                    "EndpointID": "88da580ca4160a56707c5d04a67ad5859c5b3e4da5a3e9647108bb5932a503a1",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-088634 -n running-upgrade-088634
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-088634 -n running-upgrade-088634: exit status 4 (292.109198ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 21:43:24.368973  195633 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-088634" does not appear in /home/jenkins/minikube-integration/17488-11542/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-088634" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-088634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-088634
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-088634: (2.48636968s)
--- FAIL: TestRunningBinaryUpgrade (70.73s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (77.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.3406772443.exe start -p stopped-upgrade-893609 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1025 21:43:13.436303   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.3406772443.exe start -p stopped-upgrade-893609 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m5.02305777s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.3406772443.exe -p stopped-upgrade-893609 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.3406772443.exe -p stopped-upgrade-893609 stop: (3.477208139s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-893609 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-893609 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (8.787571609s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-893609] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-893609 in cluster stopped-upgrade-893609
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-893609" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 21:43:33.052857  198699 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:43:33.053107  198699 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:43:33.053115  198699 out.go:309] Setting ErrFile to fd 2...
	I1025 21:43:33.053120  198699 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:43:33.053303  198699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
	I1025 21:43:33.053806  198699 out.go:303] Setting JSON to false
	I1025 21:43:33.059523  198699 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5162,"bootTime":1698265051,"procs":565,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:43:33.059587  198699 start.go:138] virtualization: kvm guest
	I1025 21:43:33.062232  198699 out.go:177] * [stopped-upgrade-893609] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:43:33.063851  198699 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 21:43:33.063923  198699 notify.go:220] Checking for updates...
	I1025 21:43:33.065484  198699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:43:33.067220  198699 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:43:33.068774  198699 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	I1025 21:43:33.070591  198699 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 21:43:33.072153  198699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:43:33.074076  198699 config.go:182] Loaded profile config "stopped-upgrade-893609": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1025 21:43:33.074110  198699 start_flags.go:701] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 21:43:33.076369  198699 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1025 21:43:33.077681  198699 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:43:33.107726  198699 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:43:33.107796  198699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:43:33.172373  198699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:76 SystemTime:2023-10-25 21:43:33.159860472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:43:33.172472  198699 docker.go:295] overlay module found
	I1025 21:43:33.175067  198699 out.go:177] * Using the docker driver based on existing profile
	I1025 21:43:33.176662  198699 start.go:298] selected driver: docker
	I1025 21:43:33.176680  198699 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-893609 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-893609 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1025 21:43:33.177183  198699 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:43:33.178774  198699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:43:33.239613  198699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:70 SystemTime:2023-10-25 21:43:33.229535196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:43:33.240022  198699 cni.go:84] Creating CNI manager for ""
	I1025 21:43:33.240059  198699 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1025 21:43:33.240069  198699 start_flags.go:323] config:
	{Name:stopped-upgrade-893609 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-893609 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1025 21:43:33.242176  198699 out.go:177] * Starting control plane node stopped-upgrade-893609 in cluster stopped-upgrade-893609
	I1025 21:43:33.243929  198699 cache.go:121] Beginning downloading kic base image for docker with crio
	I1025 21:43:33.245317  198699 out.go:177] * Pulling base image ...
	I1025 21:43:33.246668  198699 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1025 21:43:33.246699  198699 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 21:43:33.265331  198699 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 21:43:33.265361  198699 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	W1025 21:43:33.271227  198699 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1025 21:43:33.271417  198699 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/stopped-upgrade-893609/config.json ...
	I1025 21:43:33.271485  198699 cache.go:107] acquiring lock: {Name:mk97c995b2a9ce652c4d189bfc5e5d7c9020ad9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:33.271500  198699 cache.go:107] acquiring lock: {Name:mkd5596fc9f68382cbb7f6e29b71be0885e73c23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:33.271549  198699 cache.go:107] acquiring lock: {Name:mk514d9d0d40ab639c75f12b0a0fc9351220f63e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:33.271586  198699 cache.go:115] /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 21:43:33.271597  198699 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 131.327µs
	I1025 21:43:33.271594  198699 cache.go:107] acquiring lock: {Name:mka4777f5224132bc4ca8178cbb01ecfc922d149 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:33.271620  198699 cache.go:107] acquiring lock: {Name:mkcf1d8595ab9a8466488222ddf6759d30cc7ae0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:33.271639  198699 cache.go:115] /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1025 21:43:33.271649  198699 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 57.387µs
	I1025 21:43:33.271660  198699 cache.go:115] /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1025 21:43:33.271663  198699 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1025 21:43:33.271516  198699 cache.go:107] acquiring lock: {Name:mk70dee3324a7f3d1164c5df362d09f981a721c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:33.271674  198699 cache.go:194] Successfully downloaded all kic artifacts
	I1025 21:43:33.271697  198699 cache.go:115] /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1025 21:43:33.271680  198699 cache.go:107] acquiring lock: {Name:mka6d5a7ff688dca8be8d1762e7286442873e8d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:33.271705  198699 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 210.859µs
	I1025 21:43:33.271707  198699 start.go:365] acquiring machines lock for stopped-upgrade-893609: {Name:mk09c4afa719b898d628632701ab40a46314dbed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:33.271715  198699 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1025 21:43:33.271607  198699 cache.go:115] /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1025 21:43:33.271727  198699 cache.go:115] /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1025 21:43:33.271730  198699 cache.go:115] /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1025 21:43:33.271737  198699 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 191.672µs
	I1025 21:43:33.271750  198699 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1025 21:43:33.271726  198699 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 242.124µs
	I1025 21:43:33.271746  198699 cache.go:107] acquiring lock: {Name:mk77f8ddf4f01d722258159908d08454f02958d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:43:33.271764  198699 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1025 21:43:33.271666  198699 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 48.918µs
	I1025 21:43:33.271779  198699 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1025 21:43:33.271738  198699 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 60.27µs
	I1025 21:43:33.271781  198699 start.go:369] acquired machines lock for "stopped-upgrade-893609" in 58.31µs
	I1025 21:43:33.271791  198699 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1025 21:43:33.271607  198699 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 21:43:33.271795  198699 cache.go:115] /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1025 21:43:33.271810  198699 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:43:33.271819  198699 fix.go:54] fixHost starting: m01
	I1025 21:43:33.271806  198699 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 63.628µs
	I1025 21:43:33.271903  198699 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1025 21:43:33.271911  198699 cache.go:87] Successfully saved all images to host disk.
	I1025 21:43:33.272083  198699 cli_runner.go:164] Run: docker container inspect stopped-upgrade-893609 --format={{.State.Status}}
	I1025 21:43:33.309728  198699 fix.go:102] recreateIfNeeded on stopped-upgrade-893609: state=Stopped err=<nil>
	W1025 21:43:33.309772  198699 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 21:43:33.312416  198699 out.go:177] * Restarting existing docker container for "stopped-upgrade-893609" ...
	I1025 21:43:33.314071  198699 cli_runner.go:164] Run: docker start stopped-upgrade-893609
	I1025 21:43:33.703808  198699 cli_runner.go:164] Run: docker container inspect stopped-upgrade-893609 --format={{.State.Status}}
	I1025 21:43:33.745861  198699 kic.go:427] container "stopped-upgrade-893609" state is running.
	I1025 21:43:33.762554  198699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-893609
	I1025 21:43:33.791592  198699 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/stopped-upgrade-893609/config.json ...
	I1025 21:43:33.823383  198699 machine.go:88] provisioning docker machine ...
	I1025 21:43:33.823435  198699 ubuntu.go:169] provisioning hostname "stopped-upgrade-893609"
	I1025 21:43:33.823502  198699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-893609
	I1025 21:43:33.845338  198699 main.go:141] libmachine: Using SSH client type: native
	I1025 21:43:33.845695  198699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I1025 21:43:33.845706  198699 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-893609 && echo "stopped-upgrade-893609" | sudo tee /etc/hostname
	I1025 21:43:33.846442  198699 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50032->127.0.0.1:32979: read: connection reset by peer
	I1025 21:43:36.847934  198699 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 21:43:39.966247  198699 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-893609
	
	I1025 21:43:39.966344  198699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-893609
	I1025 21:43:39.983779  198699 main.go:141] libmachine: Using SSH client type: native
	I1025 21:43:39.984207  198699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I1025 21:43:39.984243  198699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-893609' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-893609/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-893609' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 21:43:40.089949  198699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 21:43:40.089976  198699 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17488-11542/.minikube CaCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17488-11542/.minikube}
	I1025 21:43:40.090013  198699 ubuntu.go:177] setting up certificates
	I1025 21:43:40.090025  198699 provision.go:83] configureAuth start
	I1025 21:43:40.090070  198699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-893609
	I1025 21:43:40.107954  198699 provision.go:138] copyHostCerts
	I1025 21:43:40.108005  198699 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem, removing ...
	I1025 21:43:40.108017  198699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem
	I1025 21:43:40.108072  198699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/cert.pem (1123 bytes)
	I1025 21:43:40.108160  198699 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem, removing ...
	I1025 21:43:40.108169  198699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem
	I1025 21:43:40.108193  198699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/key.pem (1675 bytes)
	I1025 21:43:40.108255  198699 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem, removing ...
	I1025 21:43:40.108263  198699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem
	I1025 21:43:40.108282  198699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17488-11542/.minikube/ca.pem (1078 bytes)
	I1025 21:43:40.108336  198699 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-893609 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-893609]
	I1025 21:43:40.257673  198699 provision.go:172] copyRemoteCerts
	I1025 21:43:40.257757  198699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 21:43:40.257825  198699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-893609
	I1025 21:43:40.280079  198699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/stopped-upgrade-893609/id_rsa Username:docker}
	I1025 21:43:40.365818  198699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 21:43:40.385027  198699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1025 21:43:40.403833  198699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 21:43:40.421155  198699 provision.go:86] duration metric: configureAuth took 331.114314ms
	I1025 21:43:40.421184  198699 ubuntu.go:193] setting minikube options for container-runtime
	I1025 21:43:40.421401  198699 config.go:182] Loaded profile config "stopped-upgrade-893609": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1025 21:43:40.421520  198699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-893609
	I1025 21:43:40.439078  198699 main.go:141] libmachine: Using SSH client type: native
	I1025 21:43:40.439541  198699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I1025 21:43:40.439572  198699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 21:43:40.992770  198699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 21:43:40.992796  198699 machine.go:91] provisioned docker machine in 7.169389284s
	I1025 21:43:40.992808  198699 start.go:300] post-start starting for "stopped-upgrade-893609" (driver="docker")
	I1025 21:43:40.992820  198699 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 21:43:40.992890  198699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 21:43:40.992937  198699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-893609
	I1025 21:43:41.008736  198699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/stopped-upgrade-893609/id_rsa Username:docker}
	I1025 21:43:41.090200  198699 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 21:43:41.093022  198699 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 21:43:41.093049  198699 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 21:43:41.093068  198699 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 21:43:41.093076  198699 info.go:137] Remote host: Ubuntu 19.10
	I1025 21:43:41.093086  198699 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-11542/.minikube/addons for local assets ...
	I1025 21:43:41.093146  198699 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-11542/.minikube/files for local assets ...
	I1025 21:43:41.093234  198699 filesync.go:149] local asset: /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem -> 183232.pem in /etc/ssl/certs
	I1025 21:43:41.093345  198699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 21:43:41.099611  198699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/ssl/certs/183232.pem --> /etc/ssl/certs/183232.pem (1708 bytes)
	I1025 21:43:41.117031  198699 start.go:303] post-start completed in 124.210745ms
	I1025 21:43:41.117108  198699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:43:41.117157  198699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-893609
	I1025 21:43:41.134890  198699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/stopped-upgrade-893609/id_rsa Username:docker}
	I1025 21:43:41.214530  198699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:43:41.218129  198699 fix.go:56] fixHost completed within 7.946304367s
	I1025 21:43:41.218153  198699 start.go:83] releasing machines lock for "stopped-upgrade-893609", held for 7.946359508s
	I1025 21:43:41.218213  198699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-893609
	I1025 21:43:41.234700  198699 ssh_runner.go:195] Run: cat /version.json
	I1025 21:43:41.234755  198699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-893609
	I1025 21:43:41.234783  198699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 21:43:41.234844  198699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-893609
	I1025 21:43:41.252759  198699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/stopped-upgrade-893609/id_rsa Username:docker}
	I1025 21:43:41.257445  198699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/stopped-upgrade-893609/id_rsa Username:docker}
	W1025 21:43:41.329289  198699 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1025 21:43:41.329356  198699 ssh_runner.go:195] Run: systemctl --version
	I1025 21:43:41.374771  198699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 21:43:41.431197  198699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 21:43:41.435097  198699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:43:41.449615  198699 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1025 21:43:41.449692  198699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:43:41.470894  198699 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 21:43:41.470916  198699 start.go:472] detecting cgroup driver to use...
	I1025 21:43:41.470950  198699 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 21:43:41.471015  198699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 21:43:41.492066  198699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 21:43:41.500313  198699 docker.go:198] disabling cri-docker service (if available) ...
	I1025 21:43:41.500359  198699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 21:43:41.509567  198699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 21:43:41.518339  198699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1025 21:43:41.526383  198699 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1025 21:43:41.526426  198699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 21:43:41.591385  198699 docker.go:214] disabling docker service ...
	I1025 21:43:41.591445  198699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 21:43:41.600609  198699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 21:43:41.609473  198699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 21:43:41.679814  198699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 21:43:41.746432  198699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 21:43:41.755451  198699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 21:43:41.767598  198699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1025 21:43:41.767670  198699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:43:41.777111  198699 out.go:177] 
	W1025 21:43:41.778542  198699 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1025 21:43:41.778562  198699 out.go:239] * 
	W1025 21:43:41.779512  198699 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 21:43:41.781052  198699 out.go:177] 

** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-893609 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (77.29s)
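The terminal error in the log is the `sed -i` against `/etc/crio/crio.conf.d/02-crio.conf` exiting with "No such file or directory": the HEAD binary rewrites `pause_image` in a cri-o drop-in file, but the Ubuntu 19.10 container provisioned by minikube v1.9.0 apparently predates that config layout, so the drop-in does not exist. A minimal sketch (assumption: GNU sed on Linux; a scratch directory stands in for the real `/etc/crio`, so this is an illustration of the failure mode and a guarded alternative, not the actual minikube fix):

```shell
#!/bin/sh
# Sketch only: reproduce the failing unguarded sed from the log, then a
# guarded variant. Paths are under a temp dir, NOT the real /etc/crio.
dir="$(mktemp -d)/crio.conf.d"
conf="$dir/02-crio.conf"

# Unguarded rewrite, as in the log: fails because the drop-in is absent.
if sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf" 2>/dev/null; then
  echo "sed ok"
else
  echo "sed failed: drop-in missing"
fi

# Guarded variant: ensure the drop-in exists before rewriting it.
mkdir -p "$dir"
[ -f "$conf" ] || printf 'pause_image = ""\n' > "$conf"
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"
grep pause_image "$conf"
```

The guarded form sidesteps the exact exit-status-2 path seen above; whether minikube should instead skip the rewrite entirely on pre-drop-in images is a separate design question.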


Test pass (278/308)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.64
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.3/json-events 5.18
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.2
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
18 TestDownloadOnlyKic 1.27
19 TestBinaryMirror 0.74
20 TestOffline 58.19
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
25 TestAddons/Setup 127.91
27 TestAddons/parallel/Registry 13.76
29 TestAddons/parallel/InspektorGadget 10.64
30 TestAddons/parallel/MetricsServer 5.61
31 TestAddons/parallel/HelmTiller 8.48
33 TestAddons/parallel/CSI 100.93
35 TestAddons/parallel/CloudSpanner 5.59
36 TestAddons/parallel/LocalPath 51.5
37 TestAddons/parallel/NvidiaDevicePlugin 5.47
40 TestAddons/serial/GCPAuth/Namespaces 0.11
41 TestAddons/StoppedEnableDisable 12.16
42 TestCertOptions 28.86
43 TestCertExpiration 237.56
45 TestForceSystemdFlag 29.24
46 TestForceSystemdEnv 43.81
48 TestKVMDriverInstallOrUpdate 1.41
52 TestErrorSpam/setup 21.06
53 TestErrorSpam/start 0.62
54 TestErrorSpam/status 0.87
55 TestErrorSpam/pause 1.48
56 TestErrorSpam/unpause 1.48
57 TestErrorSpam/stop 1.4
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 69
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 44.72
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.07
68 TestFunctional/serial/CacheCmd/cache/add_remote 4.41
69 TestFunctional/serial/CacheCmd/cache/add_local 0.84
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
74 TestFunctional/serial/CacheCmd/cache/delete 0.12
75 TestFunctional/serial/MinikubeKubectlCmd 0.12
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
77 TestFunctional/serial/ExtraConfig 31.99
78 TestFunctional/serial/ComponentHealth 0.06
79 TestFunctional/serial/LogsCmd 1.32
80 TestFunctional/serial/LogsFileCmd 1.34
81 TestFunctional/serial/InvalidService 4.04
83 TestFunctional/parallel/ConfigCmd 0.42
84 TestFunctional/parallel/DashboardCmd 7.94
85 TestFunctional/parallel/DryRun 0.45
86 TestFunctional/parallel/InternationalLanguage 0.19
87 TestFunctional/parallel/StatusCmd 1.22
91 TestFunctional/parallel/ServiceCmdConnect 8.67
92 TestFunctional/parallel/AddonsCmd 0.21
93 TestFunctional/parallel/PersistentVolumeClaim 28.72
95 TestFunctional/parallel/SSHCmd 0.79
96 TestFunctional/parallel/CpCmd 1.09
97 TestFunctional/parallel/MySQL 20.44
98 TestFunctional/parallel/FileSync 0.32
99 TestFunctional/parallel/CertSync 1.65
103 TestFunctional/parallel/NodeLabels 0.08
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
107 TestFunctional/parallel/License 0.17
108 TestFunctional/parallel/ServiceCmd/DeployApp 11.21
109 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
110 TestFunctional/parallel/ProfileCmd/profile_list 0.38
111 TestFunctional/parallel/MountCmd/any-port 7.4
112 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
113 TestFunctional/parallel/MountCmd/specific-port 2.04
114 TestFunctional/parallel/MountCmd/VerifyCleanup 2.27
115 TestFunctional/parallel/ServiceCmd/List 0.61
116 TestFunctional/parallel/ServiceCmd/JSONOutput 0.7
118 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.72
119 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.41
123 TestFunctional/parallel/ServiceCmd/Format 0.6
124 TestFunctional/parallel/ServiceCmd/URL 0.37
125 TestFunctional/parallel/Version/short 0.06
126 TestFunctional/parallel/Version/components 0.49
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
131 TestFunctional/parallel/ImageCommands/ImageBuild 2.07
132 TestFunctional/parallel/ImageCommands/Setup 0.94
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.27
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.89
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 11.22
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.74
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.19
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.9
149 TestFunctional/delete_addon-resizer_images 0.07
150 TestFunctional/delete_my-image_image 0.01
151 TestFunctional/delete_minikube_cached_images 0.02
155 TestIngressAddonLegacy/StartLegacyK8sCluster 63.84
157 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.78
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.53
162 TestJSONOutput/start/Command 69.59
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.66
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.59
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 5.73
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.22
187 TestKicCustomNetwork/create_custom_network 31.42
188 TestKicCustomNetwork/use_default_bridge_network 25.79
189 TestKicExistingNetwork 27.03
190 TestKicCustomSubnet 26.87
191 TestKicStaticIP 26.87
192 TestMainNoArgs 0.06
193 TestMinikubeProfile 49.75
196 TestMountStart/serial/StartWithMountFirst 5.2
197 TestMountStart/serial/VerifyMountFirst 0.25
198 TestMountStart/serial/StartWithMountSecond 5.16
199 TestMountStart/serial/VerifyMountSecond 0.25
200 TestMountStart/serial/DeleteFirst 1.61
201 TestMountStart/serial/VerifyMountPostDelete 0.25
202 TestMountStart/serial/Stop 1.21
203 TestMountStart/serial/RestartStopped 6.86
204 TestMountStart/serial/VerifyMountPostStop 0.25
207 TestMultiNode/serial/FreshStart2Nodes 83.46
208 TestMultiNode/serial/DeployApp2Nodes 3.56
210 TestMultiNode/serial/AddNode 49.14
211 TestMultiNode/serial/ProfileList 0.28
212 TestMultiNode/serial/CopyFile 9.02
213 TestMultiNode/serial/StopNode 2.09
214 TestMultiNode/serial/StartAfterStop 10.59
215 TestMultiNode/serial/RestartKeepsNodes 111.35
216 TestMultiNode/serial/DeleteNode 4.64
217 TestMultiNode/serial/StopMultiNode 23.85
218 TestMultiNode/serial/RestartMultiNode 73.12
219 TestMultiNode/serial/ValidateNameConflict 23.6
224 TestPreload 144.15
226 TestScheduledStopUnix 101
229 TestInsufficientStorage 13.04
232 TestKubernetesUpgrade 101.61
233 TestMissingContainerUpgrade 157.29
235 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
236 TestNoKubernetes/serial/StartWithK8s 35.55
237 TestNoKubernetes/serial/StartWithStopK8s 19.52
238 TestNoKubernetes/serial/Start 11.28
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
240 TestNoKubernetes/serial/ProfileList 1.2
241 TestNoKubernetes/serial/Stop 1.23
242 TestNoKubernetes/serial/StartNoArgs 9.32
243 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
251 TestNetworkPlugins/group/false 4.31
255 TestStoppedBinaryUpgrade/Setup 0.49
265 TestPause/serial/Start 48.02
266 TestStoppedBinaryUpgrade/MinikubeLogs 0.56
267 TestNetworkPlugins/group/auto/Start 41.35
268 TestPause/serial/SecondStartNoReconfiguration 38.34
269 TestNetworkPlugins/group/auto/KubeletFlags 0.4
270 TestNetworkPlugins/group/auto/NetCatPod 9.61
271 TestNetworkPlugins/group/auto/DNS 0.15
272 TestNetworkPlugins/group/auto/Localhost 0.15
273 TestNetworkPlugins/group/auto/HairPin 0.15
274 TestPause/serial/Pause 0.77
275 TestPause/serial/VerifyStatus 0.4
276 TestPause/serial/Unpause 0.75
277 TestNetworkPlugins/group/kindnet/Start 39.23
278 TestPause/serial/PauseAgain 1.02
279 TestPause/serial/DeletePaused 4.98
280 TestNetworkPlugins/group/calico/Start 65.74
281 TestPause/serial/VerifyDeletedResources 14.79
282 TestNetworkPlugins/group/custom-flannel/Start 55.26
283 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
284 TestNetworkPlugins/group/enable-default-cni/Start 41.29
285 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
286 TestNetworkPlugins/group/kindnet/NetCatPod 13.29
287 TestNetworkPlugins/group/kindnet/DNS 0.19
288 TestNetworkPlugins/group/kindnet/Localhost 0.19
289 TestNetworkPlugins/group/kindnet/HairPin 0.17
290 TestNetworkPlugins/group/calico/ControllerPod 5.02
291 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
292 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
293 TestNetworkPlugins/group/calico/KubeletFlags 0.31
294 TestNetworkPlugins/group/calico/NetCatPod 11.34
295 TestNetworkPlugins/group/flannel/Start 59.82
296 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
297 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.39
298 TestNetworkPlugins/group/custom-flannel/DNS 0.2
299 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
300 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
301 TestNetworkPlugins/group/calico/DNS 0.18
302 TestNetworkPlugins/group/calico/Localhost 0.13
303 TestNetworkPlugins/group/calico/HairPin 0.13
304 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
305 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
306 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
307 TestNetworkPlugins/group/bridge/Start 39.84
309 TestStartStop/group/old-k8s-version/serial/FirstStart 134.81
311 TestStartStop/group/no-preload/serial/FirstStart 53.67
312 TestNetworkPlugins/group/flannel/ControllerPod 5.02
313 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
314 TestNetworkPlugins/group/flannel/NetCatPod 10.25
315 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
316 TestNetworkPlugins/group/bridge/NetCatPod 9.29
317 TestNetworkPlugins/group/flannel/DNS 0.18
318 TestNetworkPlugins/group/flannel/Localhost 0.14
319 TestNetworkPlugins/group/flannel/HairPin 0.14
320 TestNetworkPlugins/group/bridge/DNS 32.11
321 TestStartStop/group/no-preload/serial/DeployApp 9.36
323 TestStartStop/group/embed-certs/serial/FirstStart 39.95
324 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
325 TestStartStop/group/no-preload/serial/Stop 12.01
326 TestNetworkPlugins/group/bridge/Localhost 0.15
327 TestNetworkPlugins/group/bridge/HairPin 0.14
328 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
329 TestStartStop/group/no-preload/serial/SecondStart 338.75
331 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.56
332 TestStartStop/group/embed-certs/serial/DeployApp 8.88
333 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
334 TestStartStop/group/embed-certs/serial/Stop 11.92
335 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
336 TestStartStop/group/embed-certs/serial/SecondStart 335.31
337 TestStartStop/group/old-k8s-version/serial/DeployApp 8.38
338 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.92
339 TestStartStop/group/old-k8s-version/serial/Stop 11.99
340 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
341 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
342 TestStartStop/group/old-k8s-version/serial/SecondStart 417.13
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.92
344 TestStartStop/group/default-k8s-diff-port/serial/Stop 14.54
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
346 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 342.93
347 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.02
348 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
349 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.35
350 TestStartStop/group/no-preload/serial/Pause 2.89
352 TestStartStop/group/newest-cni/serial/FirstStart 35.51
353 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.02
354 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
355 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
356 TestStartStop/group/embed-certs/serial/Pause 3.38
357 TestStartStop/group/newest-cni/serial/DeployApp 0
358 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.21
359 TestStartStop/group/newest-cni/serial/Stop 1.28
360 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
361 TestStartStop/group/newest-cni/serial/SecondStart 25.94
362 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
363 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
364 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
365 TestStartStop/group/newest-cni/serial/Pause 3.06
366 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.02
367 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
368 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
369 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.58
370 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
371 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
372 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
373 TestStartStop/group/old-k8s-version/serial/Pause 2.56
TestDownloadOnly/v1.16.0/json-events (7.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-868023 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-868023 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.643897396s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.64s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-868023
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-868023: exit status 85 (75.806821ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-868023 | jenkins | v1.31.2 | 25 Oct 23 21:10 UTC |          |
	|         | -p download-only-868023        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 21:10:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 21:10:54.539797   18335 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:10:54.539894   18335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:10:54.539898   18335 out.go:309] Setting ErrFile to fd 2...
	I1025 21:10:54.539903   18335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:10:54.540088   18335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
	W1025 21:10:54.540199   18335 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17488-11542/.minikube/config/config.json: open /home/jenkins/minikube-integration/17488-11542/.minikube/config/config.json: no such file or directory
	I1025 21:10:54.540754   18335 out.go:303] Setting JSON to true
	I1025 21:10:54.541610   18335 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3204,"bootTime":1698265051,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:10:54.541669   18335 start.go:138] virtualization: kvm guest
	I1025 21:10:54.544232   18335 out.go:97] [download-only-868023] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:10:54.545890   18335 out.go:169] MINIKUBE_LOCATION=17488
	W1025 21:10:54.544341   18335 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 21:10:54.544353   18335 notify.go:220] Checking for updates...
	I1025 21:10:54.549080   18335 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:10:54.550578   18335 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:10:54.552066   18335 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	I1025 21:10:54.553596   18335 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 21:10:54.556654   18335 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 21:10:54.556851   18335 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:10:54.577080   18335 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:10:54.577167   18335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:10:54.931505   18335 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-25 21:10:54.922826692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:10:54.931602   18335 docker.go:295] overlay module found
	I1025 21:10:54.933716   18335 out.go:97] Using the docker driver based on user configuration
	I1025 21:10:54.933735   18335 start.go:298] selected driver: docker
	I1025 21:10:54.933741   18335 start.go:902] validating driver "docker" against <nil>
	I1025 21:10:54.933816   18335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:10:54.981907   18335 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-25 21:10:54.973924464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:10:54.982076   18335 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 21:10:54.982564   18335 start_flags.go:386] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1025 21:10:54.982720   18335 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 21:10:54.984585   18335 out.go:169] Using Docker driver with root privileges
	I1025 21:10:54.986115   18335 cni.go:84] Creating CNI manager for ""
	I1025 21:10:54.986134   18335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 21:10:54.986145   18335 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 21:10:54.986156   18335 start_flags.go:323] config:
	{Name:download-only-868023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-868023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:10:54.987805   18335 out.go:97] Starting control plane node download-only-868023 in cluster download-only-868023
	I1025 21:10:54.987822   18335 cache.go:121] Beginning downloading kic base image for docker with crio
	I1025 21:10:54.989088   18335 out.go:97] Pulling base image ...
	I1025 21:10:54.989118   18335 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1025 21:10:54.989217   18335 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 21:10:55.003949   18335 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1025 21:10:55.004117   18335 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1025 21:10:55.004201   18335 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1025 21:10:55.057102   18335 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1025 21:10:55.057136   18335 cache.go:56] Caching tarball of preloaded images
	I1025 21:10:55.057271   18335 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1025 21:10:55.059392   18335 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1025 21:10:55.059415   18335 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1025 21:10:55.082736   18335 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1025 21:10:57.992262   18335 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1025 21:10:58.887571   18335 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1025 21:10:58.887664   18335 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-868023"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.3/json-events (5.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-868023 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-868023 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.176296737s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (5.18s)

                                                
                                    
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-868023
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-868023: exit status 85 (71.605257ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-868023 | jenkins | v1.31.2 | 25 Oct 23 21:10 UTC |          |
	|         | -p download-only-868023        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-868023 | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC |          |
	|         | -p download-only-868023        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 21:11:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 21:11:02.261475   18478 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:11:02.261705   18478 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:11:02.261714   18478 out.go:309] Setting ErrFile to fd 2...
	I1025 21:11:02.261719   18478 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:11:02.261901   18478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
	W1025 21:11:02.262014   18478 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17488-11542/.minikube/config/config.json: open /home/jenkins/minikube-integration/17488-11542/.minikube/config/config.json: no such file or directory
	I1025 21:11:02.262428   18478 out.go:303] Setting JSON to true
	I1025 21:11:02.263209   18478 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3211,"bootTime":1698265051,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:11:02.263268   18478 start.go:138] virtualization: kvm guest
	I1025 21:11:02.265283   18478 out.go:97] [download-only-868023] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:11:02.266968   18478 out.go:169] MINIKUBE_LOCATION=17488
	I1025 21:11:02.265451   18478 notify.go:220] Checking for updates...
	I1025 21:11:02.269987   18478 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:11:02.271525   18478 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:11:02.273009   18478 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	I1025 21:11:02.274433   18478 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 21:11:02.277407   18478 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 21:11:02.278014   18478 config.go:182] Loaded profile config "download-only-868023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1025 21:11:02.278077   18478 start.go:810] api.Load failed for download-only-868023: filestore "download-only-868023": Docker machine "download-only-868023" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1025 21:11:02.278181   18478 driver.go:378] Setting default libvirt URI to qemu:///system
	W1025 21:11:02.278233   18478 start.go:810] api.Load failed for download-only-868023: filestore "download-only-868023": Docker machine "download-only-868023" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1025 21:11:02.301103   18478 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:11:02.301172   18478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:11:02.350667   18478 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2023-10-25 21:11:02.342642854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:11:02.350788   18478 docker.go:295] overlay module found
	I1025 21:11:02.352725   18478 out.go:97] Using the docker driver based on existing profile
	I1025 21:11:02.352747   18478 start.go:298] selected driver: docker
	I1025 21:11:02.352754   18478 start.go:902] validating driver "docker" against &{Name:download-only-868023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-868023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:11:02.352920   18478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:11:02.401723   18478 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2023-10-25 21:11:02.394155604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:11:02.402408   18478 cni.go:84] Creating CNI manager for ""
	I1025 21:11:02.402433   18478 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 21:11:02.402450   18478 start_flags.go:323] config:
	{Name:download-only-868023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-868023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:11:02.404597   18478 out.go:97] Starting control plane node download-only-868023 in cluster download-only-868023
	I1025 21:11:02.404627   18478 cache.go:121] Beginning downloading kic base image for docker with crio
	I1025 21:11:02.406199   18478 out.go:97] Pulling base image ...
	I1025 21:11:02.406227   18478 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 21:11:02.406332   18478 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 21:11:02.421180   18478 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1025 21:11:02.421302   18478 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1025 21:11:02.421318   18478 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory, skipping pull
	I1025 21:11:02.421322   18478 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in cache, skipping pull
	I1025 21:11:02.421330   18478 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1025 21:11:02.453878   18478 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1025 21:11:02.453902   18478 cache.go:56] Caching tarball of preloaded images
	I1025 21:11:02.454033   18478 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 21:11:02.456012   18478 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1025 21:11:02.456033   18478 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1025 21:11:02.481111   18478 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6681d82b7b719ef3324102b709ec62eb -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1025 21:11:05.836922   18478 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1025 21:11:05.837020   18478 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17488-11542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1025 21:11:06.768499   18478 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1025 21:11:06.768612   18478 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/download-only-868023/config.json ...
	I1025 21:11:06.768791   18478 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 21:11:06.768968   18478 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17488-11542/.minikube/cache/linux/amd64/v1.28.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-868023"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.07s)
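The preload log above downloads the tarball with a `?checksum=md5:…` query, then saves and verifies the checksum locally. A minimal sketch of that verify step in Python (file path and digest below are only illustrative, taken from the log):

```python
import hashlib

def md5_matches(path: str, want_hex: str) -> bool:
    """Stream a file through MD5 and compare against the expected hex digest."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large tarballs don't have to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == want_hex

# e.g. md5_matches("preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4",
#                  "6681d82b7b719ef3324102b709ec62eb")
```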

TestDownloadOnly/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.20s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-868023
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.27s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-264376 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-264376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-264376
--- PASS: TestDownloadOnlyKic (1.27s)

TestBinaryMirror (0.74s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-856759 --alsologtostderr --binary-mirror http://127.0.0.1:40837 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-856759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-856759
--- PASS: TestBinaryMirror (0.74s)

TestOffline (58.19s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-379262 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-379262 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (55.755188356s)
helpers_test.go:175: Cleaning up "offline-crio-379262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-379262
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-379262: (2.43597606s)
--- PASS: TestOffline (58.19s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-276457
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-276457: exit status 85 (65.394425ms)

-- stdout --
	* Profile "addons-276457" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-276457"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-276457
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-276457: exit status 85 (64.083449ms)

-- stdout --
	* Profile "addons-276457" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-276457"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (127.91s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-276457 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-276457 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m7.913450813s)
--- PASS: TestAddons/Setup (127.91s)

TestAddons/parallel/Registry (13.76s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 11.614726ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-wzfbd" [2736623a-ce10-4cd0-9c1b-72b47c11791c] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010986415s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-757b5" [0f632bc0-5dac-4262-9ef7-eefd90d3e1e0] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010557615s
addons_test.go:339: (dbg) Run:  kubectl --context addons-276457 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-276457 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-276457 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.91823662s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-276457 ip
2023/10/25 21:13:31 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-276457 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.76s)

TestAddons/parallel/InspektorGadget (10.64s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-q92nv" [1aa513a3-d6c3-4a53-96be-8bafaf61556c] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.016617491s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-276457
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-276457: (5.624567863s)
--- PASS: TestAddons/parallel/InspektorGadget (10.64s)

TestAddons/parallel/MetricsServer (5.61s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 2.793003ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-npx6l" [2269dbab-85e9-49c1-a14c-dc3b4c9b6219] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009418348s
addons_test.go:414: (dbg) Run:  kubectl --context addons-276457 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-276457 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.61s)

TestAddons/parallel/HelmTiller (8.48s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 9.148036ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-n7rpr" [136d2d8d-36a3-4072-9f39-dc7708f0c429] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.01104086s
addons_test.go:472: (dbg) Run:  kubectl --context addons-276457 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-276457 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.979282844s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-276457 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (8.48s)

TestAddons/parallel/CSI (100.93s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 11.780203ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-276457 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc -o jsonpath={.status.phase} -n default
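The `get pvc hpvc` poll above is repeated by the test helper until `.status.phase` reports `Bound`. A minimal sketch of that wait loop in shell follows; the `wait_for_phase` function name, the retry count, and the stub invocation are illustrative, not minikube code:

```shell
# Hypothetical stand-in for the helper at helpers_test.go:394: re-run a
# command until its stdout equals the wanted phase, or give up.
wait_for_phase() {
  want="$1"; shift
  tries=0
  while [ "$tries" -lt 30 ]; do
    got="$("$@" 2>/dev/null)"
    if [ "$got" = "$want" ]; then
      echo "reached phase: $got"
      return 0
    fi
    tries=$((tries + 1))
    sleep 1   # the real helper also waits between polls
  done
  echo "timed out waiting for phase: $want" >&2
  return 1
}

# Against a live cluster this would be:
#   wait_for_phase Bound kubectl --context addons-276457 get pvc hpvc \
#     -o 'jsonpath={.status.phase}' -n default
# Demonstrated here with a stub command that already reports Bound:
wait_for_phase Bound echo Bound
```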
addons_test.go:573: (dbg) Run:  kubectl --context addons-276457 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [28a775b7-2040-4df4-9b3a-cae44677b4ef] Pending
helpers_test.go:344: "task-pv-pod" [28a775b7-2040-4df4-9b3a-cae44677b4ef] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [28a775b7-2040-4df4-9b3a-cae44677b4ef] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.008457838s
addons_test.go:583: (dbg) Run:  kubectl --context addons-276457 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-276457 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-276457 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-276457 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-276457 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-276457 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-276457 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-276457 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d12a91fa-38fe-4860-9cc7-d501f764a771] Pending
helpers_test.go:344: "task-pv-pod-restore" [d12a91fa-38fe-4860-9cc7-d501f764a771] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d12a91fa-38fe-4860-9cc7-d501f764a771] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.008156089s
addons_test.go:625: (dbg) Run:  kubectl --context addons-276457 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-276457 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-276457 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-276457 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-276457 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.497523037s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-276457 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (100.93s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-5h6wg" [9a49db78-b481-4deb-ab50-228a4e85728c] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007252335s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-276457
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                    
TestAddons/parallel/LocalPath (51.5s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-276457 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-276457 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-276457 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [0efbc26f-6f49-417b-bc00-555152dabeef] Pending
helpers_test.go:344: "test-local-path" [0efbc26f-6f49-417b-bc00-555152dabeef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [0efbc26f-6f49-417b-bc00-555152dabeef] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [0efbc26f-6f49-417b-bc00-555152dabeef] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.008603761s
addons_test.go:890: (dbg) Run:  kubectl --context addons-276457 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-276457 ssh "cat /opt/local-path-provisioner/pvc-b62d5b0e-4bb9-43b8-94d3-0062132da2ef_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-276457 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-276457 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-276457 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-276457 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.049459731s)
--- PASS: TestAddons/parallel/LocalPath (51.50s)
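The `ssh "cat /opt/local-path-provisioner/..."` step above reads back the file the busybox pod wrote. local-path-provisioner names each volume's host directory after the volume and claim, roughly `pvc-<volume-uid>_<namespace>_<claim-name>`; a sketch reconstructing the exact path this run checked (the UID is the one from the log, everything else is the convention):

```shell
# Rebuild the host path local-path-provisioner used in this run:
#   /opt/local-path-provisioner/pvc-<volume-uid>_<namespace>_<claim-name>
uid="b62d5b0e-4bb9-43b8-94d3-0062132da2ef"   # volume UID from this run's log
ns="default"
claim="test-pvc"
echo "/opt/local-path-provisioner/pvc-${uid}_${ns}_${claim}/file1"
```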

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.47s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6vcl4" [a592e92f-1bee-4d45-b641-bcd64d215d00] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.011622941s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-276457
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-276457 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-276457 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.16s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-276457
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-276457: (11.889210559s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-276457
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-276457
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-276457
--- PASS: TestAddons/StoppedEnableDisable (12.16s)

                                                
                                    
TestCertOptions (28.86s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-315527 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-315527 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.816366486s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-315527 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-315527 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-315527 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-315527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-315527
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-315527: (1.955235597s)
--- PASS: TestCertOptions (28.86s)
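TestCertOptions passes extra `--apiserver-ips`/`--apiserver-names` values and then verifies they landed in the apiserver certificate by dumping it with `openssl x509 -text -noout`. The same check can be reproduced offline with a throwaway self-signed certificate carrying the SANs this run requested (needs OpenSSL 1.1.1+ for `-addext`; the file paths are illustrative, not minikube's):

```shell
# Issue a throwaway self-signed cert with the same SANs the test
# requests, then inspect it the way cert_options_test.go does.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=cert-options-demo" \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.15.15,DNS:localhost,DNS:www.google.com" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# The assertion boils down to: every requested IP and name must appear
# in the Subject Alternative Name block of the dumped certificate.
openssl x509 -text -noout -in /tmp/demo.crt | grep -A1 "Subject Alternative Name"
```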

                                                
                                    
TestCertExpiration (237.56s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-909981 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-909981 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (28.144676713s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-909981 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-909981 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (27.313333652s)
helpers_test.go:175: Cleaning up "cert-expiration-909981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-909981
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-909981: (2.103209457s)
--- PASS: TestCertExpiration (237.56s)
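TestCertExpiration first issues three-minute certificates, then restarts with `--cert-expiration=8760h`, i.e. one year. Outside minikube, `openssl x509 -checkend` answers the same question ("will this cert still be valid N seconds from now?"); a sketch with a throwaway one-day certificate (file paths illustrative):

```shell
# Make a cert valid for one day, then ask whether it survives another
# minute (yes) and another 8760 h = 31,536,000 s, i.e. a year (no).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=cert-expiration-demo" \
  -keyout /tmp/exp.key -out /tmp/exp.crt 2>/dev/null

openssl x509 -checkend 60 -in /tmp/exp.crt
# -checkend exits non-zero when the cert will have expired, so keep the
# overall status zero for demonstration purposes:
openssl x509 -checkend $((8760 * 3600)) -in /tmp/exp.crt || true
```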

                                                
                                    
TestForceSystemdFlag (29.24s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-578833 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-578833 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.578935924s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-578833 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-578833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-578833
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-578833: (2.334173802s)
--- PASS: TestForceSystemdFlag (29.24s)

                                                
                                    
TestForceSystemdEnv (43.81s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-586200 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-586200 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.681372372s)
helpers_test.go:175: Cleaning up "force-systemd-env-586200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-586200
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-586200: (3.131361186s)
--- PASS: TestForceSystemdEnv (43.81s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.41s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.41s)

                                                
                                    
TestErrorSpam/setup (21.06s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-118699 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-118699 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-118699 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-118699 --driver=docker  --container-runtime=crio: (21.055891814s)
--- PASS: TestErrorSpam/setup (21.06s)

                                                
                                    
TestErrorSpam/start (0.62s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-118699 --log_dir /tmp/nospam-118699 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-118699 --log_dir /tmp/nospam-118699 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-118699 --log_dir /tmp/nospam-118699 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

                                                
                                    
TestErrorSpam/status (0.87s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-118699 --log_dir /tmp/nospam-118699 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-118699 --log_dir /tmp/nospam-118699 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-118699 --log_dir /tmp/nospam-118699 status
--- PASS: TestErrorSpam/status (0.87s)

                                                
                                    
TestErrorSpam/pause (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-118699 --log_dir /tmp/nospam-118699 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-118699 --log_dir /tmp/nospam-118699 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-118699 --log_dir /tmp/nospam-118699 pause
--- PASS: TestErrorSpam/pause (1.48s)

                                                
                                    
TestErrorSpam/unpause (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-118699 --log_dir /tmp/nospam-118699 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-118699 --log_dir /tmp/nospam-118699 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-118699 --log_dir /tmp/nospam-118699 unpause
--- PASS: TestErrorSpam/unpause (1.48s)

                                                
                                    
TestErrorSpam/stop (1.4s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-118699 --log_dir /tmp/nospam-118699 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-118699 --log_dir /tmp/nospam-118699 stop: (1.201110404s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-118699 --log_dir /tmp/nospam-118699 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-118699 --log_dir /tmp/nospam-118699 stop
--- PASS: TestErrorSpam/stop (1.40s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17488-11542/.minikube/files/etc/test/nested/copy/18323/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (69s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-947891 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-947891 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m9.001660492s)
--- PASS: TestFunctional/serial/StartWithProxy (69.00s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (44.72s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-947891 --alsologtostderr -v=8
E1025 21:18:17.917766   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
E1025 21:18:17.923441   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
E1025 21:18:17.933662   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
E1025 21:18:17.953933   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
E1025 21:18:17.994169   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
E1025 21:18:18.074473   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
E1025 21:18:18.234896   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
E1025 21:18:18.555428   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
E1025 21:18:19.196359   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
E1025 21:18:20.476862   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
E1025 21:18:23.037035   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
E1025 21:18:28.157684   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
E1025 21:18:38.397938   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
E1025 21:18:58.879095   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-947891 --alsologtostderr -v=8: (44.724054715s)
functional_test.go:659: soft start took 44.724741256s for "functional-947891" cluster.
--- PASS: TestFunctional/serial/SoftStart (44.72s)
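The `cert_rotation.go:168` errors in the SoftStart log above are harmless here (the `addons-276457` profile had already been deleted), but their timestamps illustrate exponential backoff: the gaps grow from roughly 5 ms to about 20 s, doubling each retry. A sketch reproducing that interval series (the 5 ms starting value is read off this log, not taken from client-go's source):

```shell
# Retry intervals roughly matching the cert_rotation timestamps above:
# each wait doubles, starting near 5 ms.
interval=5   # milliseconds
for i in 1 2 3 4 5 6 7 8 9 10 11 12 13; do
  echo "retry $i after ${interval}ms"
  interval=$((interval * 2))
done
```

The last printed retry waits 20480 ms, matching the ~20 s gap between the final two log entries.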

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-947891 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-947891 cache add registry.k8s.io/pause:3.3: (1.008765111s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-947891 cache add registry.k8s.io/pause:latest: (2.535050742s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.41s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.84s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-947891 /tmp/TestFunctionalserialCacheCmdcacheadd_local16612593/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 cache add minikube-local-cache-test:functional-947891
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 cache delete minikube-local-cache-test:functional-947891
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-947891
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.84s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-947891 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (274.85048ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 kubectl -- --context functional-947891 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-947891 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (31.99s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-947891 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 21:19:39.840744   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-947891 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.98563159s)
functional_test.go:757: restart took 31.985745179s for "functional-947891" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.99s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-947891 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.32s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-947891 logs: (1.316676633s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

TestFunctional/serial/LogsFileCmd (1.34s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 logs --file /tmp/TestFunctionalserialLogsFileCmd2285485388/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-947891 logs --file /tmp/TestFunctionalserialLogsFileCmd2285485388/001/logs.txt: (1.338960094s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

TestFunctional/serial/InvalidService (4.04s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-947891 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-947891
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-947891: exit status 115 (319.944357ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30625 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-947891 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.04s)

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-947891 config get cpus: exit status 14 (99.928851ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-947891 config get cpus: exit status 14 (61.405048ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)

TestFunctional/parallel/DashboardCmd (7.94s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-947891 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-947891 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 52582: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.94s)

TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-947891 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-947891 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (187.205271ms)

-- stdout --
	* [functional-947891] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
-- /stdout --
** stderr ** 
	I1025 21:19:49.813771   52105 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:19:49.813905   52105 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:19:49.813913   52105 out.go:309] Setting ErrFile to fd 2...
	I1025 21:19:49.813918   52105 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:19:49.814115   52105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
	I1025 21:19:49.814634   52105 out.go:303] Setting JSON to false
	I1025 21:19:49.815612   52105 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3739,"bootTime":1698265051,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:19:49.815674   52105 start.go:138] virtualization: kvm guest
	I1025 21:19:49.818537   52105 out.go:177] * [functional-947891] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:19:49.820708   52105 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 21:19:49.822176   52105 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:19:49.820722   52105 notify.go:220] Checking for updates...
	I1025 21:19:49.825227   52105 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:19:49.826774   52105 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	I1025 21:19:49.828552   52105 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 21:19:49.831872   52105 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:19:49.835405   52105 config.go:182] Loaded profile config "functional-947891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:19:49.836054   52105 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:19:49.866802   52105 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:19:49.866902   52105 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:19:49.923321   52105 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-10-25 21:19:49.915180263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:19:49.923418   52105 docker.go:295] overlay module found
	I1025 21:19:49.925799   52105 out.go:177] * Using the docker driver based on existing profile
	I1025 21:19:49.927656   52105 start.go:298] selected driver: docker
	I1025 21:19:49.927683   52105 start.go:902] validating driver "docker" against &{Name:functional-947891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-947891 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:19:49.927796   52105 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:19:49.930633   52105 out.go:177] 
	W1025 21:19:49.932328   52105 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 21:19:49.933929   52105 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-947891 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-947891 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-947891 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (186.902926ms)

-- stdout --
	* [functional-947891] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
-- /stdout --
** stderr ** 
	I1025 21:19:49.629501   51997 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:19:49.629842   51997 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:19:49.629860   51997 out.go:309] Setting ErrFile to fd 2...
	I1025 21:19:49.629868   51997 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:19:49.630271   51997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
	I1025 21:19:49.631044   51997 out.go:303] Setting JSON to false
	I1025 21:19:49.632431   51997 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3739,"bootTime":1698265051,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:19:49.632524   51997 start.go:138] virtualization: kvm guest
	I1025 21:19:49.635278   51997 out.go:177] * [functional-947891] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I1025 21:19:49.637305   51997 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 21:19:49.638614   51997 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:19:49.637340   51997 notify.go:220] Checking for updates...
	I1025 21:19:49.640294   51997 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:19:49.642147   51997 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	I1025 21:19:49.643659   51997 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 21:19:49.645091   51997 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:19:49.647044   51997 config.go:182] Loaded profile config "functional-947891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:19:49.647678   51997 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:19:49.679397   51997 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:19:49.679505   51997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:19:49.735394   51997 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-10-25 21:19:49.724647545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:19:49.735506   51997 docker.go:295] overlay module found
	I1025 21:19:49.737770   51997 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1025 21:19:49.739274   51997 start.go:298] selected driver: docker
	I1025 21:19:49.739291   51997 start.go:902] validating driver "docker" against &{Name:functional-947891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-947891 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:19:49.739416   51997 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:19:49.742189   51997 out.go:177] 
	W1025 21:19:49.743956   51997 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 21:19:49.745722   51997 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (1.22s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)

TestFunctional/parallel/ServiceCmdConnect (8.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-947891 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-947891 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-6wvwt" [bcc4ed4a-c540-4038-8ffb-7d3d740987f5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-6wvwt" [bcc4ed4a-c540-4038-8ffb-7d3d740987f5] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.065312971s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30408
functional_test.go:1674: http://192.168.49.2:30408: success! body:

Hostname: hello-node-connect-55497b8b78-6wvwt

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30408
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.67s)
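The endpoint above comes from `minikube service hello-node-connect --url`, which prints the NodePort mapping the test then curls. As an aside, splitting such a URL into host and NodePort needs only POSIX parameter expansion (the URL value is copied from the log; the variable names are illustrative):

```shell
#!/bin/sh
# Split a NodePort URL like the one minikube printed into host and port.
url="http://192.168.49.2:30408"   # value taken from the log above
hostport=${url#http://}          # strip the scheme
host=${hostport%%:*}             # everything before the first ':'
port=${hostport##*:}             # everything after the last ':'
echo "host=$host port=$port"     # prints: host=192.168.49.2 port=30408
```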

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (28.72s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9b95cf7a-5c47-40a0-8fa0-f9abd2869ef0] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.019139556s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-947891 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-947891 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-947891 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-947891 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9ebec9a9-724f-45ac-ae88-db0c44594095] Pending
helpers_test.go:344: "sp-pod" [9ebec9a9-724f-45ac-ae88-db0c44594095] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9ebec9a9-724f-45ac-ae88-db0c44594095] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.010764082s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-947891 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-947891 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-947891 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dba8e930-fc8e-4478-a693-757b233825c7] Pending
helpers_test.go:344: "sp-pod" [dba8e930-fc8e-4478-a693-757b233825c7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.045972154s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-947891 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.72s)
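For reference, the `testdata/storage-provisioner/pvc.yaml` the test applies is, in outline, an ordinary claim against the default storage class. A minimal manifest of the same shape (the claim name `myclaim` appears in the log; the size and other details are illustrative, not the test's actual file):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim            # the test reads this back with "get pvc myclaim"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi       # size is illustrative
```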

TestFunctional/parallel/SSHCmd (0.79s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

TestFunctional/parallel/CpCmd (1.09s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh -n functional-947891 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 cp functional-947891:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1010010887/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh -n functional-947891 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.09s)
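The cp test's pattern of copying a file into the node, reading it back over ssh, and comparing is a standard round-trip check; done locally, the comparison step boils down to `cmp` (a sketch with temporary files, not the test's own code; the real test uses `/home/docker/cp-test.txt` inside the node):

```shell
#!/bin/sh
# Round-trip check: write a file, copy it, assert both copies match.
src=$(mktemp)
dst=$(mktemp)
echo "cp-test contents" > "$src"
cp "$src" "$dst"
cmp -s "$src" "$dst" && echo "files match"
```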

TestFunctional/parallel/MySQL (20.44s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-947891 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-m7bxw" [d076c094-e60e-4375-b78e-1f4f359933d6] Pending
helpers_test.go:344: "mysql-859648c796-m7bxw" [d076c094-e60e-4375-b78e-1f4f359933d6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-m7bxw" [d076c094-e60e-4375-b78e-1f4f359933d6] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.012687869s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-947891 exec mysql-859648c796-m7bxw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-947891 exec mysql-859648c796-m7bxw -- mysql -ppassword -e "show databases;": exit status 1 (133.578306ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-947891 exec mysql-859648c796-m7bxw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-947891 exec mysql-859648c796-m7bxw -- mysql -ppassword -e "show databases;": exit status 1 (168.40887ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-947891 exec mysql-859648c796-m7bxw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.44s)
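The two ERROR 2002 failures above are expected: the pod reports Running before mysqld has created its socket, so the test simply re-runs the query until it succeeds. A generic retry helper of that shape (a sketch, not the harness's actual code; `flaky_cmd` is a stub standing in for the mysql invocation):

```shell
#!/bin/sh
# Retry a command up to N times, mirroring how the test re-runs
# "mysql -e 'show databases;'" until mysqld's socket exists.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
  done
  return 1
}

# Stub that fails twice and then succeeds, like mysqld warming up.
count_file=$(mktemp)
echo 0 > "$count_file"
flaky_cmd() {
  n=$(( $(cat "$count_file") + 1 ))
  echo "$n" > "$count_file"
  [ "$n" -ge 3 ]
}

retry 5 flaky_cmd && echo "succeeded on attempt $(cat "$count_file")"
# prints: succeeded on attempt 3
```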

TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/18323/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "sudo cat /etc/test/nested/copy/18323/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

TestFunctional/parallel/CertSync (1.65s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/18323.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "sudo cat /etc/ssl/certs/18323.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/18323.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "sudo cat /usr/share/ca-certificates/18323.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/183232.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "sudo cat /etc/ssl/certs/183232.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/183232.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "sudo cat /usr/share/ca-certificates/183232.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.65s)
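The oddly named `/etc/ssl/certs/51391683.0` and `/etc/ssl/certs/3ec20f2e.0` checked above are OpenSSL subject-hash links: c_rehash-style cert directories store each CA certificate under `<subject_hash>.0` so the library can look it up by subject name. The hash for any PEM can be computed directly (a sketch using a throwaway self-signed certificate; assumes the `openssl` CLI is available):

```shell
#!/bin/sh
# Generate a throwaway self-signed cert, then compute the subject-hash
# name under which an /etc/ssl/certs-style directory would store it.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=cert-sync-example" \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" 2>/dev/null
hash=$(openssl x509 -noout -hash -in "$dir/cert.pem")
cp "$dir/cert.pem" "$dir/$hash.0"   # same naming scheme as 51391683.0
echo "stored as $hash.0"
```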

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-947891 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-947891 ssh "sudo systemctl is-active docker": exit status 1 (272.187353ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-947891 ssh "sudo systemctl is-active containerd": exit status 1 (279.557385ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
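The non-zero exits above are the point of the test: `systemctl is-active` prints the unit state and exits 0 only for `active` (the status 3 relayed through ssh means the unit is inactive), so on a crio cluster both docker and containerd are expected to report inactive. A stub capturing that exit-code contract (illustrative, not the real systemctl):

```shell
#!/bin/sh
# Hypothetical stand-in for "systemctl is-active <unit>": prints the
# state and exits 0 only when the unit is active, 3 otherwise.
is_active() {
  case "$1" in
    crio) echo active;   return 0 ;;
    *)    echo inactive; return 3 ;;
  esac
}

is_active docker; echo "docker exit=$?"   # inactive, exit=3
is_active crio;   echo "crio exit=$?"     # active, exit=0
```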

TestFunctional/parallel/License (0.17s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-947891 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-947891 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-xdvmw" [41607862-f1a8-4799-be8d-1ba45d08e5d0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-xdvmw" [41607862-f1a8-4799-be8d-1ba45d08e5d0] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.040167129s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "293.104679ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "88.040677ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/MountCmd/any-port (7.4s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-947891 /tmp/TestFunctionalparallelMountCmdany-port4170638365/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1698268787903605391" to /tmp/TestFunctionalparallelMountCmdany-port4170638365/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1698268787903605391" to /tmp/TestFunctionalparallelMountCmdany-port4170638365/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1698268787903605391" to /tmp/TestFunctionalparallelMountCmdany-port4170638365/001/test-1698268787903605391
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-947891 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (288.530782ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 25 21:19 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 25 21:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 25 21:19 test-1698268787903605391
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh cat /mount-9p/test-1698268787903605391
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-947891 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [384fef01-f44a-42ad-a865-05cbc94d929f] Pending
helpers_test.go:344: "busybox-mount" [384fef01-f44a-42ad-a865-05cbc94d929f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [384fef01-f44a-42ad-a865-05cbc94d929f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [384fef01-f44a-42ad-a865-05cbc94d929f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.018510074s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-947891 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-947891 /tmp/TestFunctionalparallelMountCmdany-port4170638365/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "308.320574ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "61.092215ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/specific-port (2.04s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-947891 /tmp/TestFunctionalparallelMountCmdspecific-port1805514589/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-947891 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (334.806346ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-947891 /tmp/TestFunctionalparallelMountCmdspecific-port1805514589/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-947891 ssh "sudo umount -f /mount-9p": exit status 1 (269.614187ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-947891 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-947891 /tmp/TestFunctionalparallelMountCmdspecific-port1805514589/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.27s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-947891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324999364/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-947891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324999364/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-947891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324999364/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-947891 ssh "findmnt -T" /mount1: exit status 1 (338.844213ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
2023/10/25 21:19:57 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-947891 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-947891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324999364/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-947891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324999364/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-947891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324999364/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.27s)

TestFunctional/parallel/ServiceCmd/List (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 service list -o json
functional_test.go:1493: Took "699.788702ms" to run "out/minikube-linux-amd64 -p functional-947891 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.70s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.72s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-947891 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-947891 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-947891 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 54421: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-947891 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.72s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32542
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-947891 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-947891 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5bebd56f-0174-4120-a7c0-852025ac77c4] Pending
helpers_test.go:344: "nginx-svc" [5bebd56f-0174-4120-a7c0-852025ac77c4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5bebd56f-0174-4120-a7c0-852025ac77c4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.018703386s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.41s)

TestFunctional/parallel/ServiceCmd/Format (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.60s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32542
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
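As a side note, the endpoint printed by `minikube service hello-node --url` is a plain URL, so splitting out the node IP and NodePort needs nothing beyond the standard library. A minimal sketch (the endpoint value is copied from the log above; this is illustrative, not part of the test suite):

```python
from urllib.parse import urlparse

# Endpoint copied from the `service hello-node --url` output above.
endpoint = "http://192.168.49.2:32542"
parsed = urlparse(endpoint)
print(parsed.hostname)  # 192.168.49.2 (IP of the cluster's docker container)
print(parsed.port)      # 32542 (NodePort assigned to the hello-node service)
```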

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.49s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-947891 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-947891
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-947891 image ls --format short --alsologtostderr:
I1025 21:20:25.237496   58361 out.go:296] Setting OutFile to fd 1 ...
I1025 21:20:25.237659   58361 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:20:25.237671   58361 out.go:309] Setting ErrFile to fd 2...
I1025 21:20:25.237679   58361 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:20:25.237990   58361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
I1025 21:20:25.241078   58361 config.go:182] Loaded profile config "functional-947891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1025 21:20:25.241430   58361 config.go:182] Loaded profile config "functional-947891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1025 21:20:25.242066   58361 cli_runner.go:164] Run: docker container inspect functional-947891 --format={{.State.Status}}
I1025 21:20:25.262948   58361 ssh_runner.go:195] Run: systemctl --version
I1025 21:20:25.263010   58361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-947891
I1025 21:20:25.286499   58361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/functional-947891/id_rsa Username:docker}
I1025 21:20:25.370276   58361 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-947891 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-apiserver          | v1.28.3            | 5374347291230 | 127MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-controller-manager | v1.28.3            | 10baa1ca17068 | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.3            | bfc896cf80fba | 74.7MB |
| docker.io/library/nginx                 | latest             | 593aee2afb642 | 191MB  |
| docker.io/library/nginx                 | alpine             | b135667c98980 | 49.5MB |
| gcr.io/google-containers/addon-resizer  | functional-947891  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-scheduler          | v1.28.3            | 6d1b4fd1b182d | 61.5MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 3b85be0b10d38 | 601MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-947891 image ls --format table --alsologtostderr:
I1025 21:20:25.511637   58523 out.go:296] Setting OutFile to fd 1 ...
I1025 21:20:25.511754   58523 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:20:25.511763   58523 out.go:309] Setting ErrFile to fd 2...
I1025 21:20:25.511768   58523 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:20:25.511989   58523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
I1025 21:20:25.512563   58523 config.go:182] Loaded profile config "functional-947891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1025 21:20:25.512663   58523 config.go:182] Loaded profile config "functional-947891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1025 21:20:25.513059   58523 cli_runner.go:164] Run: docker container inspect functional-947891 --format={{.State.Status}}
I1025 21:20:25.530972   58523 ssh_runner.go:195] Run: systemctl --version
I1025 21:20:25.531026   58523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-947891
I1025 21:20:25.550174   58523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/functional-947891/id_rsa Username:docker}
I1025 21:20:25.638381   58523 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)
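The Size column above is not exact: the raw byte counts (visible for the same images in the JSON listing) are rounded to three significant figures in SI (1000-based) units. A small Python sketch of that rounding rule — an assumption inferred from comparing the two listings, not minikube's actual implementation:

```python
def humanize_size(num_bytes: int) -> str:
    """Round a raw byte count to three significant figures in SI units,
    e.g. 295456551 -> '295MB', 746911 -> '747kB' (assumed rounding rule)."""
    size = float(num_bytes)
    units = ["B", "kB", "MB", "GB", "TB"]
    index = 0
    while size >= 1000 and index < len(units) - 1:
        size /= 1000.0
        index += 1
    # Keep three significant figures regardless of magnitude.
    if size >= 100:
        digits = 0
    elif size >= 10:
        digits = 1
    else:
        digits = 2
    return f"{size:.{digits}f}{units[index]}"

# Spot-check against rows of the table above.
print(humanize_size(295456551))  # 295MB (registry.k8s.io/etcd)
print(humanize_size(746911))     # 747kB (registry.k8s.io/pause:3.1)
print(humanize_size(49538855))   # 49.5MB (nginx:alpine)
```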

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-947891 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etc
d:3.5.9-0"],"size":"295456551"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":["registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"127165392"},{"id":"3b85be0b10d389e268b35d4c04075b95c295dd24d595e8c5261e43ab94c47de4","repoDigests":["docker.io/library/mysql@sha256:188121394576d05aedb5daf229403bf58d4ee16e04e81828e4d43b72bd227bc2","docker.io/library/mysql@sha256:4f9bfb0f7dd97739ceedb546b381534bb11e9b4abf013d6ad9ae6473fed66099"],"repoTags":["docker.io/library/mysql:5.7"],"size":"600824773"},{"id":"593aee2afb642798b83a85306d2625fd7f089c0a1242c7e75a237846d80aa2a0","repoDigests":["docker.io/library/nginx@sha256:0d60ba9498d4491525334696a736b4c19b56231b972061fab2f536d48ebfd7ce","docker.io/library/nginx@sha256:add4792d930c25dd2abf2ef9ea79de578097a1c175a16ab258143
32fe33622de"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707","registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"123188534"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":["registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"74691991"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a
89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725","registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"61498678"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry
.k8s.io/pause:3.1"],"size":"746911"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e","repoDigests":["docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d","docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e6333
2c3d0bf7a4bb77"],"repoTags":["docker.io/library/nginx:alpine"],"size":"49538855"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-947891"],"size":"34114467"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"01
84c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-947891 image ls --format json --alsologtostderr:
I1025 21:20:25.508286   58522 out.go:296] Setting OutFile to fd 1 ...
I1025 21:20:25.508433   58522 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:20:25.508443   58522 out.go:309] Setting ErrFile to fd 2...
I1025 21:20:25.508450   58522 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:20:25.508664   58522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
I1025 21:20:25.509218   58522 config.go:182] Loaded profile config "functional-947891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1025 21:20:25.509337   58522 config.go:182] Loaded profile config "functional-947891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1025 21:20:25.509745   58522 cli_runner.go:164] Run: docker container inspect functional-947891 --format={{.State.Status}}
I1025 21:20:25.526680   58522 ssh_runner.go:195] Run: systemctl --version
I1025 21:20:25.526727   58522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-947891
I1025 21:20:25.543576   58522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/functional-947891/id_rsa Username:docker}
I1025 21:20:25.630849   58522 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
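Two details of the JSON format are easy to trip over: `size` is a string, and `repoTags` is empty for untagged images (the dashboard and metrics-scraper entries above). A short consumption sketch, with two entries copied verbatim from the listing:

```python
import json

# Two entries copied from the `image ls --format json` output above.
listing = json.loads("""[
  {"id": "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e",
   "repoDigests": ["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],
   "repoTags": ["registry.k8s.io/pause:3.1"], "size": "746911"},
  {"id": "115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7",
   "repoDigests": ["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a"],
   "repoTags": [], "size": "43824855"}
]""")

def display_name(image: dict) -> str:
    # Untagged images (empty repoTags) fall back to their first digest,
    # which is why two rows in the JSON/YAML listings carry no repoTags.
    return image["repoTags"][0] if image["repoTags"] else image["repoDigests"][0]

for image in listing:
    print(display_name(image), int(image["size"]))  # size arrives as a string
```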

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-947891 image ls --format yaml --alsologtostderr:
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "127165392"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
- registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "123188534"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-947891
size: "34114467"
- id: 593aee2afb642798b83a85306d2625fd7f089c0a1242c7e75a237846d80aa2a0
repoDigests:
- docker.io/library/nginx@sha256:0d60ba9498d4491525334696a736b4c19b56231b972061fab2f536d48ebfd7ce
- docker.io/library/nginx@sha256:add4792d930c25dd2abf2ef9ea79de578097a1c175a16ab25814332fe33622de
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 3b85be0b10d389e268b35d4c04075b95c295dd24d595e8c5261e43ab94c47de4
repoDigests:
- docker.io/library/mysql@sha256:188121394576d05aedb5daf229403bf58d4ee16e04e81828e4d43b72bd227bc2
- docker.io/library/mysql@sha256:4f9bfb0f7dd97739ceedb546b381534bb11e9b4abf013d6ad9ae6473fed66099
repoTags:
- docker.io/library/mysql:5.7
size: "600824773"
- id: b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e
repoDigests:
- docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d
- docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77
repoTags:
- docker.io/library/nginx:alpine
size: "49538855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "74691991"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
- registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "61498678"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-947891 image ls --format yaml --alsologtostderr:
I1025 21:20:25.227632   58362 out.go:296] Setting OutFile to fd 1 ...
I1025 21:20:25.228375   58362 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:20:25.228392   58362 out.go:309] Setting ErrFile to fd 2...
I1025 21:20:25.228399   58362 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:20:25.228730   58362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
I1025 21:20:25.229586   58362 config.go:182] Loaded profile config "functional-947891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1025 21:20:25.229737   58362 config.go:182] Loaded profile config "functional-947891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1025 21:20:25.230354   58362 cli_runner.go:164] Run: docker container inspect functional-947891 --format={{.State.Status}}
I1025 21:20:25.252439   58362 ssh_runner.go:195] Run: systemctl --version
I1025 21:20:25.252492   58362 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-947891
I1025 21:20:25.278472   58362 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/functional-947891/id_rsa Username:docker}
I1025 21:20:25.366611   58362 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-947891 ssh pgrep buildkitd: exit status 1 (283.857052ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image build -t localhost/my-image:functional-947891 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-947891 image build -t localhost/my-image:functional-947891 testdata/build --alsologtostderr: (1.574799802s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-947891 image build -t localhost/my-image:functional-947891 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e2a3410dd50
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-947891
--> b2e650f3837
Successfully tagged localhost/my-image:functional-947891
b2e650f3837cf423139515dae7cf798763eb19499796b7d35efdcb1e0b9f32bd
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-947891 image build -t localhost/my-image:functional-947891 testdata/build --alsologtostderr:
I1025 21:20:25.515199   58533 out.go:296] Setting OutFile to fd 1 ...
I1025 21:20:25.515333   58533 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:20:25.515341   58533 out.go:309] Setting ErrFile to fd 2...
I1025 21:20:25.515346   58533 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:20:25.515533   58533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
I1025 21:20:25.516086   58533 config.go:182] Loaded profile config "functional-947891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1025 21:20:25.516616   58533 config.go:182] Loaded profile config "functional-947891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1025 21:20:25.517056   58533 cli_runner.go:164] Run: docker container inspect functional-947891 --format={{.State.Status}}
I1025 21:20:25.537455   58533 ssh_runner.go:195] Run: systemctl --version
I1025 21:20:25.537526   58533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-947891
I1025 21:20:25.556928   58533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/functional-947891/id_rsa Username:docker}
I1025 21:20:25.646185   58533 build_images.go:151] Building image from path: /tmp/build.3133377487.tar
I1025 21:20:25.646242   58533 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 21:20:25.655285   58533 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3133377487.tar
I1025 21:20:25.659215   58533 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3133377487.tar: stat -c "%s %y" /var/lib/minikube/build/build.3133377487.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3133377487.tar': No such file or directory
I1025 21:20:25.659247   58533 ssh_runner.go:362] scp /tmp/build.3133377487.tar --> /var/lib/minikube/build/build.3133377487.tar (3072 bytes)
I1025 21:20:25.731834   58533 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3133377487
I1025 21:20:25.739720   58533 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3133377487 -xf /var/lib/minikube/build/build.3133377487.tar
I1025 21:20:25.748116   58533 crio.go:297] Building image: /var/lib/minikube/build/build.3133377487
I1025 21:20:25.748174   58533 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-947891 /var/lib/minikube/build/build.3133377487 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1025 21:20:26.995202   58533 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-947891 /var/lib/minikube/build/build.3133377487 --cgroup-manager=cgroupfs: (1.247008479s)
I1025 21:20:26.995264   58533 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3133377487
I1025 21:20:27.002968   58533 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3133377487.tar
I1025 21:20:27.010063   58533 build_images.go:207] Built localhost/my-image:functional-947891 from /tmp/build.3133377487.tar
I1025 21:20:27.010084   58533 build_images.go:123] succeeded building to: functional-947891
I1025 21:20:27.010089   58533 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.07s)

TestFunctional/parallel/ImageCommands/Setup (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-947891
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.94s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image load --daemon gcr.io/google-containers/addon-resizer:functional-947891 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-947891 image load --daemon gcr.io/google-containers/addon-resizer:functional-947891 --alsologtostderr: (5.055628201s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.27s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image load --daemon gcr.io/google-containers/addon-resizer:functional-947891 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-947891 image load --daemon gcr.io/google-containers/addon-resizer:functional-947891 --alsologtostderr: (2.621256538s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.89s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-947891 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.23.213 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-947891 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.10246596s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-947891
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image load --daemon gcr.io/google-containers/addon-resizer:functional-947891 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-947891 image load --daemon gcr.io/google-containers/addon-resizer:functional-947891 --alsologtostderr: (9.870430868s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.22s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image save gcr.io/google-containers/addon-resizer:functional-947891 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.74s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image rm gcr.io/google-containers/addon-resizer:functional-947891 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.19s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-947891
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-947891 image save --daemon gcr.io/google-containers/addon-resizer:functional-947891 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-947891
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.90s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-947891
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-947891
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-947891
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (63.84s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-620621 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1025 21:21:01.761716   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-620621 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m3.843893216s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (63.84s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.78s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-620621 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-620621 addons enable ingress --alsologtostderr -v=5: (10.780977841s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.78s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-620621 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

TestJSONOutput/start/Command (69.59s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-895354 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1025 21:24:49.381707   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/functional-947891/client.crt: no such file or directory
E1025 21:24:51.943128   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/functional-947891/client.crt: no such file or directory
E1025 21:24:57.064113   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/functional-947891/client.crt: no such file or directory
E1025 21:25:07.305036   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/functional-947891/client.crt: no such file or directory
E1025 21:25:27.785671   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/functional-947891/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-895354 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m9.592577312s)
--- PASS: TestJSONOutput/start/Command (69.59s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-895354 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-895354 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.73s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-895354 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-895354 --output=json --user=testUser: (5.731471983s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-203803 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-203803 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.061452ms)
-- stdout --
	{"specversion":"1.0","id":"b7b6f829-44a6-40fb-81b8-0ec1ec34e540","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-203803] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c74fa56-a133-43b0-947d-f462bab10fd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17488"}}
	{"specversion":"1.0","id":"69726390-1c9f-4e39-9ea5-aa5b762e85a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2c433c78-28a8-4d64-b18e-df56fd2a65fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig"}}
	{"specversion":"1.0","id":"fe461771-93fc-43f3-aa7d-103372aef237","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube"}}
	{"specversion":"1.0","id":"61c01776-458d-418c-af2c-7dd99376ee30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"acd6ff53-61f3-41d0-9597-d51768a9f220","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2e27d5e3-b55e-41bf-a0df-9d9f4a56e951","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-203803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-203803
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (31.42s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-215971 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-215971 --network=: (29.30586059s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-215971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-215971
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-215971: (2.098226895s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.42s)

TestKicCustomNetwork/use_default_bridge_network (25.79s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-229099 --network=bridge
E1025 21:26:50.391812   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
E1025 21:26:50.397104   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
E1025 21:26:50.407354   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
E1025 21:26:50.427619   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
E1025 21:26:50.467900   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
E1025 21:26:50.548240   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
E1025 21:26:50.708655   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
E1025 21:26:51.029255   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
E1025 21:26:51.670108   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
E1025 21:26:52.950265   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
E1025 21:26:55.510801   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
E1025 21:27:00.631808   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-229099 --network=bridge: (23.867089181s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-229099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-229099
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-229099: (1.90557178s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.79s)

TestKicExistingNetwork (27.03s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-167697 --network=existing-network
E1025 21:27:10.871994   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
E1025 21:27:30.666249   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/functional-947891/client.crt: no such file or directory
E1025 21:27:31.352509   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-167697 --network=existing-network: (25.076099646s)
helpers_test.go:175: Cleaning up "existing-network-167697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-167697
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-167697: (1.826723254s)
--- PASS: TestKicExistingNetwork (27.03s)

TestKicCustomSubnet (26.87s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-320309 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-320309 --subnet=192.168.60.0/24: (24.775325626s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-320309 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-320309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-320309
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-320309: (2.075410542s)
--- PASS: TestKicCustomSubnet (26.87s)

TestKicStaticIP (26.87s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-040042 --static-ip=192.168.200.200
E1025 21:28:12.312684   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
E1025 21:28:17.918749   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-040042 --static-ip=192.168.200.200: (24.661267149s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-040042 ip
helpers_test.go:175: Cleaning up "static-ip-040042" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-040042
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-040042: (2.074530893s)
--- PASS: TestKicStaticIP (26.87s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (49.75s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-519050 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-519050 --driver=docker  --container-runtime=crio: (20.604559259s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-521786 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-521786 --driver=docker  --container-runtime=crio: (24.415210344s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-519050
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-521786
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-521786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-521786
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-521786: (1.874332065s)
helpers_test.go:175: Cleaning up "first-519050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-519050
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-519050: (1.858074224s)
--- PASS: TestMinikubeProfile (49.75s)

TestMountStart/serial/StartWithMountFirst (5.2s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-047570 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-047570 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.199726648s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.20s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-047570 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (5.16s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-061439 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-061439 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.156665534s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.16s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-061439 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-047570 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-047570 --alsologtostderr -v=5: (1.612483132s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-061439 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-061439
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-061439: (1.210426211s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (6.86s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-061439
E1025 21:29:34.234916   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-061439: (5.864271868s)
--- PASS: TestMountStart/serial/RestartStopped (6.86s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-061439 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (83.46s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-874778 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1025 21:29:46.818797   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/functional-947891/client.crt: no such file or directory
E1025 21:30:14.507291   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/functional-947891/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-874778 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m23.022791549s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (83.46s)

TestMultiNode/serial/DeployApp2Nodes (3.56s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874778 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874778 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-874778 -- rollout status deployment/busybox: (1.866106385s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874778 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874778 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874778 -- exec busybox-5bc68d56bd-2z62q -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874778 -- exec busybox-5bc68d56bd-xh8tr -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874778 -- exec busybox-5bc68d56bd-2z62q -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874778 -- exec busybox-5bc68d56bd-xh8tr -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874778 -- exec busybox-5bc68d56bd-2z62q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874778 -- exec busybox-5bc68d56bd-xh8tr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.56s)

TestMultiNode/serial/AddNode (49.14s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-874778 -v 3 --alsologtostderr
E1025 21:31:50.390354   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-874778 -v 3 --alsologtostderr: (48.552889376s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.14s)

TestMultiNode/serial/ProfileList (0.28s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)

TestMultiNode/serial/CopyFile (9.02s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 cp testdata/cp-test.txt multinode-874778:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 cp multinode-874778:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile26464156/001/cp-test_multinode-874778.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 cp multinode-874778:/home/docker/cp-test.txt multinode-874778-m02:/home/docker/cp-test_multinode-874778_multinode-874778-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778-m02 "sudo cat /home/docker/cp-test_multinode-874778_multinode-874778-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 cp multinode-874778:/home/docker/cp-test.txt multinode-874778-m03:/home/docker/cp-test_multinode-874778_multinode-874778-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778-m03 "sudo cat /home/docker/cp-test_multinode-874778_multinode-874778-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 cp testdata/cp-test.txt multinode-874778-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 cp multinode-874778-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile26464156/001/cp-test_multinode-874778-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 cp multinode-874778-m02:/home/docker/cp-test.txt multinode-874778:/home/docker/cp-test_multinode-874778-m02_multinode-874778.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778 "sudo cat /home/docker/cp-test_multinode-874778-m02_multinode-874778.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 cp multinode-874778-m02:/home/docker/cp-test.txt multinode-874778-m03:/home/docker/cp-test_multinode-874778-m02_multinode-874778-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778-m03 "sudo cat /home/docker/cp-test_multinode-874778-m02_multinode-874778-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 cp testdata/cp-test.txt multinode-874778-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 cp multinode-874778-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile26464156/001/cp-test_multinode-874778-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 cp multinode-874778-m03:/home/docker/cp-test.txt multinode-874778:/home/docker/cp-test_multinode-874778-m03_multinode-874778.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778 "sudo cat /home/docker/cp-test_multinode-874778-m03_multinode-874778.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 cp multinode-874778-m03:/home/docker/cp-test.txt multinode-874778-m02:/home/docker/cp-test_multinode-874778-m03_multinode-874778-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 ssh -n multinode-874778-m02 "sudo cat /home/docker/cp-test_multinode-874778-m03_multinode-874778-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.02s)

TestMultiNode/serial/StopNode (2.09s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-874778 node stop m03: (1.20333054s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-874778 status: exit status 7 (441.48458ms)

-- stdout --
	multinode-874778
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-874778-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-874778-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-874778 status --alsologtostderr: exit status 7 (448.595544ms)

-- stdout --
	multinode-874778
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-874778-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-874778-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 21:32:10.764957  118021 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:32:10.765214  118021 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:32:10.765226  118021 out.go:309] Setting ErrFile to fd 2...
	I1025 21:32:10.765231  118021 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:32:10.765425  118021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
	I1025 21:32:10.765639  118021 out.go:303] Setting JSON to false
	I1025 21:32:10.765679  118021 mustload.go:65] Loading cluster: multinode-874778
	I1025 21:32:10.765791  118021 notify.go:220] Checking for updates...
	I1025 21:32:10.766227  118021 config.go:182] Loaded profile config "multinode-874778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:32:10.766249  118021 status.go:255] checking status of multinode-874778 ...
	I1025 21:32:10.766834  118021 cli_runner.go:164] Run: docker container inspect multinode-874778 --format={{.State.Status}}
	I1025 21:32:10.783615  118021 status.go:330] multinode-874778 host status = "Running" (err=<nil>)
	I1025 21:32:10.783637  118021 host.go:66] Checking if "multinode-874778" exists ...
	I1025 21:32:10.783915  118021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-874778
	I1025 21:32:10.800139  118021 host.go:66] Checking if "multinode-874778" exists ...
	I1025 21:32:10.800399  118021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:32:10.800440  118021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778
	I1025 21:32:10.816094  118021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778/id_rsa Username:docker}
	I1025 21:32:10.903213  118021 ssh_runner.go:195] Run: systemctl --version
	I1025 21:32:10.907018  118021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:32:10.917244  118021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:32:10.967462  118021 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:56 SystemTime:2023-10-25 21:32:10.959388701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:32:10.967991  118021 kubeconfig.go:92] found "multinode-874778" server: "https://192.168.58.2:8443"
	I1025 21:32:10.968013  118021 api_server.go:166] Checking apiserver status ...
	I1025 21:32:10.968080  118021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:32:10.977727  118021 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1431/cgroup
	I1025 21:32:10.985650  118021 api_server.go:182] apiserver freezer: "9:freezer:/docker/0862499eed10d3d0fe339b85aed58bcf1373fd182861731b8aa1cf4b7ed35d6b/crio/crio-88d339d4a0421aaaa656ef01ba3e9c8ab5ba254b199dac17a025b044f9db4d8f"
	I1025 21:32:10.985701  118021 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0862499eed10d3d0fe339b85aed58bcf1373fd182861731b8aa1cf4b7ed35d6b/crio/crio-88d339d4a0421aaaa656ef01ba3e9c8ab5ba254b199dac17a025b044f9db4d8f/freezer.state
	I1025 21:32:10.992799  118021 api_server.go:204] freezer state: "THAWED"
	I1025 21:32:10.992822  118021 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1025 21:32:10.997080  118021 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1025 21:32:10.997098  118021 status.go:421] multinode-874778 apiserver status = Running (err=<nil>)
	I1025 21:32:10.997106  118021 status.go:257] multinode-874778 status: &{Name:multinode-874778 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 21:32:10.997126  118021 status.go:255] checking status of multinode-874778-m02 ...
	I1025 21:32:10.997331  118021 cli_runner.go:164] Run: docker container inspect multinode-874778-m02 --format={{.State.Status}}
	I1025 21:32:11.013705  118021 status.go:330] multinode-874778-m02 host status = "Running" (err=<nil>)
	I1025 21:32:11.013724  118021 host.go:66] Checking if "multinode-874778-m02" exists ...
	I1025 21:32:11.013944  118021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-874778-m02
	I1025 21:32:11.029343  118021 host.go:66] Checking if "multinode-874778-m02" exists ...
	I1025 21:32:11.029591  118021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:32:11.029637  118021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874778-m02
	I1025 21:32:11.045054  118021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17488-11542/.minikube/machines/multinode-874778-m02/id_rsa Username:docker}
	I1025 21:32:11.130815  118021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:32:11.141106  118021 status.go:257] multinode-874778-m02 status: &{Name:multinode-874778-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 21:32:11.141141  118021 status.go:255] checking status of multinode-874778-m03 ...
	I1025 21:32:11.141451  118021 cli_runner.go:164] Run: docker container inspect multinode-874778-m03 --format={{.State.Status}}
	I1025 21:32:11.156987  118021 status.go:330] multinode-874778-m03 host status = "Stopped" (err=<nil>)
	I1025 21:32:11.157007  118021 status.go:343] host is not running, skipping remaining checks
	I1025 21:32:11.157014  118021 status.go:257] multinode-874778-m03 status: &{Name:multinode-874778-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.09s)

TestMultiNode/serial/StartAfterStop (10.59s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 node start m03 --alsologtostderr
E1025 21:32:18.076059   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-874778 node start m03 --alsologtostderr: (9.9307057s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.59s)

TestMultiNode/serial/RestartKeepsNodes (111.35s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-874778
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-874778
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-874778: (24.768068429s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-874778 --wait=true -v=8 --alsologtostderr
E1025 21:33:17.918775   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-874778 --wait=true -v=8 --alsologtostderr: (1m26.467974737s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-874778
--- PASS: TestMultiNode/serial/RestartKeepsNodes (111.35s)

TestMultiNode/serial/DeleteNode (4.64s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-874778 node delete m03: (4.076214629s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.64s)

TestMultiNode/serial/StopMultiNode (23.85s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 stop
E1025 21:34:40.962331   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-874778 stop: (23.659425092s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-874778 status: exit status 7 (92.734489ms)

-- stdout --
	multinode-874778
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-874778-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-874778 status --alsologtostderr: exit status 7 (96.063925ms)

-- stdout --
	multinode-874778
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-874778-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 21:34:41.551860  127866 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:34:41.552114  127866 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:34:41.552125  127866 out.go:309] Setting ErrFile to fd 2...
	I1025 21:34:41.552132  127866 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:34:41.552337  127866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
	I1025 21:34:41.552539  127866 out.go:303] Setting JSON to false
	I1025 21:34:41.552579  127866 mustload.go:65] Loading cluster: multinode-874778
	I1025 21:34:41.552621  127866 notify.go:220] Checking for updates...
	I1025 21:34:41.552995  127866 config.go:182] Loaded profile config "multinode-874778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:34:41.553011  127866 status.go:255] checking status of multinode-874778 ...
	I1025 21:34:41.553494  127866 cli_runner.go:164] Run: docker container inspect multinode-874778 --format={{.State.Status}}
	I1025 21:34:41.572312  127866 status.go:330] multinode-874778 host status = "Stopped" (err=<nil>)
	I1025 21:34:41.572330  127866 status.go:343] host is not running, skipping remaining checks
	I1025 21:34:41.572335  127866 status.go:257] multinode-874778 status: &{Name:multinode-874778 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 21:34:41.572359  127866 status.go:255] checking status of multinode-874778-m02 ...
	I1025 21:34:41.572584  127866 cli_runner.go:164] Run: docker container inspect multinode-874778-m02 --format={{.State.Status}}
	I1025 21:34:41.588591  127866 status.go:330] multinode-874778-m02 host status = "Stopped" (err=<nil>)
	I1025 21:34:41.588611  127866 status.go:343] host is not running, skipping remaining checks
	I1025 21:34:41.588616  127866 status.go:257] multinode-874778-m02 status: &{Name:multinode-874778-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.85s)

TestMultiNode/serial/RestartMultiNode (73.12s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-874778 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1025 21:34:46.818566   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/functional-947891/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-874778 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m12.557288774s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874778 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (73.12s)

TestMultiNode/serial/ValidateNameConflict (23.6s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-874778
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-874778-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-874778-m02 --driver=docker  --container-runtime=crio: exit status 14 (75.732567ms)

-- stdout --
	* [multinode-874778-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-874778-m02' is duplicated with machine name 'multinode-874778-m02' in profile 'multinode-874778'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-874778-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-874778-m03 --driver=docker  --container-runtime=crio: (21.320965488s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-874778
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-874778: exit status 80 (269.58172ms)

-- stdout --
	* Adding node m03 to cluster multinode-874778
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-874778-m03 already exists in multinode-874778-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-874778-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-874778-m03: (1.868444066s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.60s)

TestPreload (144.15s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-410954 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1025 21:36:50.390908   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-410954 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m9.206654628s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-410954 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-410954
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-410954: (5.683241245s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-410954 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1025 21:38:17.918521   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-410954 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m5.826789551s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-410954 image list
helpers_test.go:175: Cleaning up "test-preload-410954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-410954
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-410954: (2.252580332s)
--- PASS: TestPreload (144.15s)

TestScheduledStopUnix (101s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-035296 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-035296 --memory=2048 --driver=docker  --container-runtime=crio: (24.808394667s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-035296 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-035296 -n scheduled-stop-035296
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-035296 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-035296 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-035296 -n scheduled-stop-035296
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-035296
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-035296 --schedule 15s
E1025 21:39:46.819962   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/functional-947891/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-035296
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-035296: exit status 7 (78.848592ms)

-- stdout --
	scheduled-stop-035296
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-035296 -n scheduled-stop-035296
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-035296 -n scheduled-stop-035296: exit status 7 (73.594989ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-035296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-035296
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-035296: (4.784507034s)
--- PASS: TestScheduledStopUnix (101.00s)

TestInsufficientStorage (13.04s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-082359 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-082359 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.648158168s)

-- stdout --
	{"specversion":"1.0","id":"418cf9aa-a487-4918-ab2d-fcbaefa9f848","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-082359] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f267dce9-6f79-4c15-bbbc-88e92f8e3ab8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17488"}}
	{"specversion":"1.0","id":"73d9c025-bc42-41ee-a0e2-072f0c48706b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c584bdcb-44ce-427e-b16a-3047071cce86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig"}}
	{"specversion":"1.0","id":"fb85733c-8355-4b3c-85e6-c78b238f7c17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube"}}
	{"specversion":"1.0","id":"d187fdf7-7ae7-4023-aa96-3a02bf07a6e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cb297c7a-7e38-4923-926f-c39c8f3d8066","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"28ac1a22-26ca-4cf9-9cb8-a396ec717500","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c2ebdcbb-17d6-4e51-9be5-dc43c94242b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"19a44cdb-b57b-4792-bb42-f7e422bcf206","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"da30ecb7-5c32-44c5-af6b-e17dfd12259b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"360ef94d-244e-4022-afb1-f5df79edc18e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-082359 in cluster insufficient-storage-082359","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"49758cd3-0f40-4990-925d-20cc3b37a47b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d8da210-967e-4a1b-9a21-d2b338a556ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6221331a-420f-4607-a49b-71868de08a46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-082359 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-082359 --output=json --layout=cluster: exit status 7 (260.649317ms)

-- stdout --
	{"Name":"insufficient-storage-082359","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-082359","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1025 21:40:39.959074  149694 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-082359" does not appear in /home/jenkins/minikube-integration/17488-11542/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-082359 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-082359 --output=json --layout=cluster: exit status 7 (260.169866ms)

-- stdout --
	{"Name":"insufficient-storage-082359","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-082359","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1025 21:40:40.220341  149781 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-082359" does not appear in /home/jenkins/minikube-integration/17488-11542/kubeconfig
	E1025 21:40:40.229434  149781 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/insufficient-storage-082359/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-082359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-082359
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-082359: (1.871274379s)
--- PASS: TestInsufficientStorage (13.04s)

TestKubernetesUpgrade (101.61s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-456885 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-456885 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.738950318s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-456885
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-456885: (1.217445142s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-456885 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-456885 status --format={{.Host}}: exit status 7 (82.447885ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-456885 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-456885 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.93019064s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-456885 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-456885 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-456885 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (80.177433ms)

-- stdout --
	* [kubernetes-upgrade-456885] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-456885
	    minikube start -p kubernetes-upgrade-456885 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4568852 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-456885 --kubernetes-version=v1.28.3
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-456885 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-456885 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.189596157s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-456885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-456885
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-456885: (4.309089294s)
--- PASS: TestKubernetesUpgrade (101.61s)

TestMissingContainerUpgrade (157.29s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.772183558.exe start -p missing-upgrade-392851 --memory=2200 --driver=docker  --container-runtime=crio
E1025 21:41:09.868303   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/functional-947891/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.772183558.exe start -p missing-upgrade-392851 --memory=2200 --driver=docker  --container-runtime=crio: (1m19.490440961s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-392851
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-392851
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-392851 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-392851 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m14.453352405s)
helpers_test.go:175: Cleaning up "missing-upgrade-392851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-392851
E1025 21:43:17.917830   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-392851: (2.431662739s)
--- PASS: TestMissingContainerUpgrade (157.29s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-437445 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-437445 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (96.240013ms)

-- stdout --
	* [NoKubernetes-437445] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
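The MK_USAGE failure above is the expected result: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. A hedged shell sketch of that validation (variable names are illustrative; this is not minikube's actual Go implementation):

```shell
# Illustrative re-creation of the flag check that produced the error above.
no_kubernetes=true
kubernetes_version="1.20"   # stands in for the --kubernetes-version flag
if [ "$no_kubernetes" = true ] && [ -n "$kubernetes_version" ]; then
  echo "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes"
fi
```

Per the error text, a globally configured version can be cleared with `minikube config unset kubernetes-version`.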

TestNoKubernetes/serial/StartWithK8s (35.55s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-437445 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-437445 --driver=docker  --container-runtime=crio: (35.120708304s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-437445 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.55s)

TestNoKubernetes/serial/StartWithStopK8s (19.52s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-437445 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-437445 --no-kubernetes --driver=docker  --container-runtime=crio: (17.097163318s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-437445 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-437445 status -o json: exit status 2 (365.316534ms)

-- stdout --
	{"Name":"NoKubernetes-437445","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-437445
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-437445: (2.060497828s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.52s)
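The exit-2 status check above still emits valid JSON. A small sketch (JSON copied from the stdout block; the parsing is illustrative) shows how the host/kubelet split can be read programmatically:

```shell
# Status JSON copied from the test output above.
OUT='{"Name":"NoKubernetes-437445","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
# Print host and kubelet state: the container runs, Kubernetes does not.
echo "$OUT" | python3 -c 'import json, sys; s = json.load(sys.stdin); print(s["Host"], s["Kubelet"])'
```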

TestNoKubernetes/serial/Start (11.28s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-437445 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-437445 --no-kubernetes --driver=docker  --container-runtime=crio: (11.278554323s)
--- PASS: TestNoKubernetes/serial/Start (11.28s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-437445 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-437445 "sudo systemctl is-active --quiet service kubelet": exit status 1 (338.3989ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

TestNoKubernetes/serial/ProfileList (1.2s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.20s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-437445
E1025 21:41:50.390085   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-437445: (1.228715444s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (9.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-437445 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-437445 --driver=docker  --container-runtime=crio: (9.323162635s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.32s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-437445 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-437445 "sudo systemctl is-active --quiet service kubelet": exit status 1 (363.975556ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestNetworkPlugins/group/false (4.31s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-245646 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-245646 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (739.160682ms)

-- stdout --
	* [false-245646] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1025 21:42:06.704679  177343 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:42:06.705698  177343 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:42:06.705731  177343 out.go:309] Setting ErrFile to fd 2...
	I1025 21:42:06.705741  177343 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:42:06.706127  177343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-11542/.minikube/bin
	I1025 21:42:06.706807  177343 out.go:303] Setting JSON to false
	I1025 21:42:06.708500  177343 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5076,"bootTime":1698265051,"procs":458,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:42:06.708567  177343 start.go:138] virtualization: kvm guest
	I1025 21:42:06.749506  177343 out.go:177] * [false-245646] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:42:06.812514  177343 notify.go:220] Checking for updates...
	I1025 21:42:06.812546  177343 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 21:42:06.874865  177343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:42:06.979195  177343 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-11542/kubeconfig
	I1025 21:42:07.030534  177343 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-11542/.minikube
	I1025 21:42:07.093446  177343 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 21:42:07.116405  177343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:42:07.179930  177343 config.go:182] Loaded profile config "cert-expiration-909981": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:42:07.180085  177343 config.go:182] Loaded profile config "cert-options-315527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 21:42:07.180219  177343 config.go:182] Loaded profile config "missing-upgrade-392851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1025 21:42:07.180336  177343 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:42:07.205265  177343 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:42:07.205391  177343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:42:07.265288  177343 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:63 SystemTime:2023-10-25 21:42:07.255230647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 21:42:07.265396  177343 docker.go:295] overlay module found
	I1025 21:42:07.356916  177343 out.go:177] * Using the docker driver based on user configuration
	I1025 21:42:07.359423  177343 start.go:298] selected driver: docker
	I1025 21:42:07.359443  177343 start.go:902] validating driver "docker" against <nil>
	I1025 21:42:07.359471  177343 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:42:07.362218  177343 out.go:177] 
	W1025 21:42:07.363721  177343 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1025 21:42:07.365095  177343 out.go:177] 

** /stderr **
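The exit-14 usage error above is the point of this subtest: with `--container-runtime=crio`, `--cni=false` is rejected. A hedged shell sketch of that validation (illustrative only; minikube implements this check in Go):

```shell
# Illustrative re-creation of the check behind the MK_USAGE error above.
container_runtime="crio"
cni="false"   # the test passes --cni=false on purpose
if [ "$container_runtime" = "crio" ] && [ "$cni" = "false" ]; then
  echo 'X Exiting due to MK_USAGE: The "crio" container runtime requires CNI'
fi
```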
net_test.go:88: 
----------------------- debugLogs start: false-245646 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-245646

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-245646

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-245646

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-245646

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-245646

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-245646

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-245646

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-245646

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-245646

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-245646

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-245646

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-245646" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-245646" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-245646" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-245646" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-245646" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-245646" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-245646" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-245646" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> k8s: describe kube-proxy daemon set:
error: context "false-245646" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-245646" does not exist

>>> k8s: kube-proxy logs:
error: context "false-245646" does not exist

>>> host: kubelet daemon status:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: kubelet daemon config:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> k8s: kubelet logs:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 25 Oct 2023 21:42:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: cert-expiration-909981
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt
    server: https://127.0.0.1:32928
  name: missing-upgrade-392851
contexts:
- context:
    cluster: cert-expiration-909981
    extensions:
    - extension:
        last-update: Wed, 25 Oct 2023 21:42:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-909981
  name: cert-expiration-909981
- context:
    cluster: missing-upgrade-392851
    user: missing-upgrade-392851
  name: missing-upgrade-392851
current-context: cert-expiration-909981
kind: Config
preferences: {}
users:
- name: cert-expiration-909981
  user:
    client-certificate: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/cert-expiration-909981/client.crt
    client-key: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/cert-expiration-909981/client.key
- name: missing-upgrade-392851
  user:
    client-certificate: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/missing-upgrade-392851/client.crt
    client-key: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/missing-upgrade-392851/client.key
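The repeated `context "false-245646" does not exist` errors earlier in this log come from kubectl being asked for a context that is absent from the config shown here: only `cert-expiration-909981` and `missing-upgrade-392851` are defined. A minimal sketch of that cross-reference rule, using the names from this report's config (`dangling_refs` is a hypothetical helper, not part of minikube or kubectl):

```python
# Cross-check that every context in a kubeconfig references a defined
# cluster and user. Names below are copied from the report's config.
config = {
    "clusters": ["cert-expiration-909981", "missing-upgrade-392851"],
    "users": ["cert-expiration-909981", "missing-upgrade-392851"],
    "contexts": [
        {"name": "cert-expiration-909981",
         "cluster": "cert-expiration-909981", "user": "cert-expiration-909981"},
        {"name": "missing-upgrade-392851",
         "cluster": "missing-upgrade-392851", "user": "missing-upgrade-392851"},
    ],
    "current-context": "cert-expiration-909981",
}

def dangling_refs(cfg):
    """Return names of contexts whose cluster or user is not defined."""
    clusters, users = set(cfg["clusters"]), set(cfg["users"])
    return [c["name"] for c in cfg["contexts"]
            if c["cluster"] not in clusters or c["user"] not in users]

print(dangling_refs(config))  # → []
```

Requesting `--context false-245646` against this file fails before any lookup like the one above even runs, because no context entry with that name exists at all.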

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-245646

>>> host: docker daemon status:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: docker daemon config:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: /etc/docker/daemon.json:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: docker system info:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: cri-docker daemon status:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: cri-docker daemon config:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: cri-dockerd version:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: containerd daemon status:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: containerd daemon config:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: /etc/containerd/config.toml:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: containerd config dump:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: crio daemon status:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: crio daemon config:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: /etc/crio:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

>>> host: crio config:
* Profile "false-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245646"

----------------------- debugLogs end: false-245646 [took: 3.404731582s] --------------------------------
helpers_test.go:175: Cleaning up "false-245646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-245646
--- PASS: TestNetworkPlugins/group/false (4.31s)

TestStoppedBinaryUpgrade/Setup (0.49s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

TestPause/serial/Start (48.02s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-733283 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-733283 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (48.024795294s)
--- PASS: TestPause/serial/Start (48.02s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.56s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-893609
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.56s)

TestNetworkPlugins/group/auto/Start (41.35s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-245646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-245646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.347123941s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.35s)

TestPause/serial/SecondStartNoReconfiguration (38.34s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-733283 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-733283 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.313673447s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.34s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-245646 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (9.61s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-245646 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-br7cl" [c4ff5423-831d-40df-bd06-d1fa01cb1db9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-br7cl" [c4ff5423-831d-40df-bd06-d1fa01cb1db9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.010293766s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.61s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-245646 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-245646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-245646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
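The Localhost and HairPin tests above both run `nc -w 5 -i 5 -z <host> 8080` inside the netcat pod: `-z` means "just check that a TCP connection can be opened", with `-w 5` as the timeout. A rough, self-contained sketch of that probe in Python (the throwaway local listener stands in for the netcat deployment's port 8080, which is an assumption for demonstration only):

```python
import socket

def can_connect(host, port, timeout=5.0):
    """Roughly what `nc -w 5 -z host port` checks: can a TCP
    connection be opened within the timeout?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Spin up a throwaway listener on an OS-assigned free port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
print(can_connect("127.0.0.1", port))   # → True (listener up)
srv.close()
print(can_connect("127.0.0.1", port))   # → False (connection refused)
```

The hairpin variant differs only in the target: the pod connects to its own service name (`netcat`) rather than `localhost`, which exercises the CNI plugin's hairpin NAT path.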

TestPause/serial/Pause (0.77s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-733283 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

TestPause/serial/VerifyStatus (0.4s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-733283 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-733283 --output=json --layout=cluster: exit status 2 (404.509503ms)

-- stdout --
	{"Name":"pause-733283","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-733283","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)
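The status JSON above encodes state as HTTP-style codes (418 Paused, 405 Stopped, 200 OK), and `minikube status` deliberately exits non-zero for a paused cluster, which is why the test accepts exit status 2. A short sketch pulling the paused components out of an abridged copy of that payload:

```python
import json

# Abridged from the VerifyStatus stdout above.
payload = '''{"Name":"pause-733283","StatusCode":418,"StatusName":"Paused",
 "Nodes":[{"Name":"pause-733283","StatusCode":200,"StatusName":"OK",
  "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
   "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''

status = json.loads(payload)
paused = [c["Name"]
          for node in status["Nodes"]
          for c in node["Components"].values()
          if c["StatusCode"] == 418]
print(paused)  # → ['apiserver']
```

Note that pausing stops the kubelet (405) rather than pausing it, matching the `"kubelet":{"StatusCode":405,"StatusName":"Stopped"}` entry in the full output.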

TestPause/serial/Unpause (0.75s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-733283 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.75s)

TestNetworkPlugins/group/kindnet/Start (39.23s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-245646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-245646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (39.230905263s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (39.23s)

TestPause/serial/PauseAgain (1.02s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-733283 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-733283 --alsologtostderr -v=5: (1.019557956s)
--- PASS: TestPause/serial/PauseAgain (1.02s)

TestPause/serial/DeletePaused (4.98s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-733283 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-733283 --alsologtostderr -v=5: (4.981459972s)
--- PASS: TestPause/serial/DeletePaused (4.98s)

TestNetworkPlugins/group/calico/Start (65.74s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-245646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-245646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m5.74378707s)
--- PASS: TestNetworkPlugins/group/calico/Start (65.74s)

TestPause/serial/VerifyDeletedResources (14.79s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.72512429s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-733283
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-733283: exit status 1 (25.133262ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-733283: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.79s)

TestNetworkPlugins/group/custom-flannel/Start (55.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-245646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-245646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (55.260824971s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.26s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nz6fv" [03b25da1-d13e-4059-a467-b5dcc7c89cf7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.020252538s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/enable-default-cni/Start (41.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-245646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-245646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (41.289864655s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (41.29s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-245646 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-245646 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bqx5b" [f2929acd-e50a-40cf-82c8-35f4d6c7b371] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bqx5b" [f2929acd-e50a-40cf-82c8-35f4d6c7b371] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.01097834s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.29s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-245646 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-245646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-245646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2k2h7" [47a794d9-d520-4fd5-b64f-994ad7416598] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.021160566s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-245646 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-245646 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wzl5m" [0458a70e-1590-48f1-96f9-95175601414b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wzl5m" [0458a70e-1590-48f1-96f9-95175601414b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.010506835s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-245646 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-245646 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gw9j4" [f6bc5b67-61d3-4771-82cd-a22faf7b315f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gw9j4" [f6bc5b67-61d3-4771-82cd-a22faf7b315f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.01059193s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.34s)

TestNetworkPlugins/group/flannel/Start (59.82s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-245646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-245646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (59.824890395s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.82s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-245646 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-245646 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-46k77" [dcbf41ee-9c05-4aab-9216-0ec74b9cde34] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-46k77" [dcbf41ee-9c05-4aab-9216-0ec74b9cde34] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.009858065s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.39s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-245646 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-245646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-245646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-245646 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-245646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-245646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-245646 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-245646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-245646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/bridge/Start (39.84s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-245646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-245646 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (39.835569107s)
--- PASS: TestNetworkPlugins/group/bridge/Start (39.84s)

TestStartStop/group/old-k8s-version/serial/FirstStart (134.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-499881 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-499881 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m14.80651697s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (134.81s)

TestStartStop/group/no-preload/serial/FirstStart (53.67s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-621610 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-621610 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (53.666621679s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (53.67s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-mb48d" [864ecc93-404a-4aaa-adf8-5de6e7edbfd4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.019180379s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-245646 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-245646 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7l4gw" [91ee5331-cfb8-464b-a760-2b1206004b77] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7l4gw" [91ee5331-cfb8-464b-a760-2b1206004b77] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.009492775s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-245646 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-245646 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qqgwf" [9f71768a-13e8-45ad-9d1e-18c0c1f8cc68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qqgwf" [9f71768a-13e8-45ad-9d1e-18c0c1f8cc68] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.008994342s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.29s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-245646 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-245646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-245646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/DNS (32.11s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-245646 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-245646 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.149770942s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-245646 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-245646 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.16874822s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-245646 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (32.11s)
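The bridge/DNS test above only passes because net_test.go re-runs the `nslookup` probe until it succeeds: the first two attempts timed out while the bridge CNI's DNS path was still converging, and the third passed. A minimal sketch of that retry-until-healthy pattern, where `probe` is a hypothetical stand-in for `kubectl --context bridge-245646 exec deployment/netcat -- nslookup kubernetes.default`:

```shell
#!/bin/sh
# Retry-until-healthy: re-run a probe with a cap on attempts, succeeding
# as soon as one run exits 0 (mirrors the repeated nslookup runs above).
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      echo "succeeded on attempt $i"
      return 0
    fi
    i=$((i + 1))
  done
  echo "failed after $attempts attempts"
  return 1
}

# Hypothetical stand-in probe: fails twice, then succeeds, mimicking the
# two "connection timed out" nslookup runs before the third one passed.
count_file=$(mktemp)
echo 0 > "$count_file"
probe() {
  n=$(($(cat "$count_file") + 1))
  echo "$n" > "$count_file"
  [ "$n" -ge 3 ]
}

retry 5 probe   # prints: succeeded on attempt 3
```

In the real test the per-attempt timeout comes from `nslookup` itself (~15s per failed run here), which is why the whole subtest took 32.11s despite ultimately passing.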

TestStartStop/group/no-preload/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-621610 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6895bf9d-f888-43ea-b5ed-8c3bd3031fbe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6895bf9d-f888-43ea-b5ed-8c3bd3031fbe] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.016951555s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-621610 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.36s)

TestStartStop/group/embed-certs/serial/FirstStart (39.95s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-817068 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-817068 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (39.954739845s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (39.95s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-621610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-621610 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/no-preload/serial/Stop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-621610 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-621610 --alsologtostderr -v=3: (12.006684044s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.01s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-245646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-245646 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-621610 -n no-preload-621610
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-621610 -n no-preload-621610: exit status 7 (94.856692ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-621610 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)
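The `status error: exit status 7 (may be ok)` line above is the test tolerating a non-zero exit from `minikube status`: after a deliberate `minikube stop`, the host reports `Stopped` and the command exits 7, which the test accepts rather than failing. A minimal sketch of that tolerant check, using a hypothetical `fake_status` stand-in for `out/minikube-linux-amd64 status --format={{.Host}}`:

```shell
#!/bin/sh
# fake_status is a hypothetical stand-in for `minikube status`, reproducing
# the behavior seen in the log: prints "Stopped" and exits with status 7.
fake_status() {
  echo "Stopped"
  return 7
}

# The assignment's exit status is the command substitution's exit status,
# so the stopped-host code survives the capture.
host=$(fake_status)
code=$?

# Treat a clean exit OR the observed stopped-host code as acceptable.
if [ "$code" -eq 0 ] || [ "$code" -eq 7 ]; then
  echo "status ok: host=$host exit=$code"
else
  echo "unexpected status: exit=$code" >&2
  exit 1
fi
```

This is why the subsequent `addons enable dashboard` step can run against a stopped profile without the subtest aborting first.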

TestStartStop/group/no-preload/serial/SecondStart (338.75s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-621610 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1025 21:48:17.918737   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-621610 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m38.355997161s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-621610 -n no-preload-621610
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (338.75s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-022347 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-022347 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (50.562500723s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.56s)

TestStartStop/group/embed-certs/serial/DeployApp (8.88s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-817068 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6d4ec03b-6f4e-4301-8efc-9aa12dcebd84] Pending
helpers_test.go:344: "busybox" [6d4ec03b-6f4e-4301-8efc-9aa12dcebd84] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6d4ec03b-6f4e-4301-8efc-9aa12dcebd84] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.300496298s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-817068 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.88s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-817068 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-817068 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/embed-certs/serial/Stop (11.92s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-817068 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-817068 --alsologtostderr -v=3: (11.916515165s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.92s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-817068 -n embed-certs-817068
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-817068 -n embed-certs-817068: exit status 7 (77.02065ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-817068 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (335.31s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-817068 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-817068 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m34.902745068s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-817068 -n embed-certs-817068
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (335.31s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-499881 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4ab7554e-9aea-4304-9f2e-233040faf650] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4ab7554e-9aea-4304-9f2e-233040faf650] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.013197428s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-499881 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.38s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-499881 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-499881 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-499881 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-499881 --alsologtostderr -v=3: (11.990489249s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-022347 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c46c2613-42bf-4990-b3fc-708528e442eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c46c2613-42bf-4990-b3fc-708528e442eb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.015332709s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-022347 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499881 -n old-k8s-version-499881
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499881 -n old-k8s-version-499881: exit status 7 (77.499161ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-499881 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
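Note: the `exit status 7` above is how `minikube status` reports a stopped profile, which is why the harness logs "status error: exit status 7 (may be ok)" right after an explicit stop. A minimal sketch of that interpretation follows; the exit code is hardcoded from the observed run above (in a live check it would come from `minikube status -p old-k8s-version-499881; status_exit=$?`):

```shell
# Interpret the exit code of `minikube status` the way the harness does.
# 0 means the cluster is running; 7 (observed above) means it is stopped,
# which is expected immediately after `minikube stop`.
status_exit=7   # hardcoded from the run above; normally captured from $?
if [ "$status_exit" -eq 0 ]; then
  echo "running"
elif [ "$status_exit" -eq 7 ]; then
  echo "stopped (expected after an explicit stop)"
else
  echo "unexpected status exit: $status_exit"
fi
```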

TestStartStop/group/old-k8s-version/serial/SecondStart (417.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-499881 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-499881 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (6m56.83152618s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499881 -n old-k8s-version-499881
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (417.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-022347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-022347 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (14.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-022347 --alsologtostderr -v=3
E1025 21:49:26.671236   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
E1025 21:49:26.676507   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
E1025 21:49:26.686757   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
E1025 21:49:26.706948   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
E1025 21:49:26.747237   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
E1025 21:49:26.827737   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
E1025 21:49:26.988136   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
E1025 21:49:27.308548   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
E1025 21:49:27.949574   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
E1025 21:49:29.230160   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
E1025 21:49:31.790399   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
E1025 21:49:36.911348   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-022347 --alsologtostderr -v=3: (14.536509867s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (14.54s)
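Note: the interleaved `E1025 … cert_rotation.go:168] key failed` lines are not produced by this test; they appear to come from client-go's certificate-rotation watcher still polling client certificates of profiles (auto-245646, kindnet-245646, etc.) that were exercised and cleaned up earlier in the run, so the files no longer exist. When reading a log like this, that noise can be filtered out; a small sketch (the sample file below is an abbreviated stand-in for this report):

```shell
# Strip the benign cert_rotation noise from a captured test log so the
# test's own output is easier to follow. sample.log stands in for the report.
cat > sample.log <<'EOF'
E1025 21:49:26.671236   18323 cert_rotation.go:168] key failed with : open .../auto-245646/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-022347 --alsologtostderr -v=3: (14.536509867s)
EOF
grep -v 'cert_rotation.go:168' sample.log
```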

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-022347 -n default-k8s-diff-port-022347
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-022347 -n default-k8s-diff-port-022347: exit status 7 (79.270949ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-022347 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (342.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-022347 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1025 21:49:46.819209   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/functional-947891/client.crt: no such file or directory
E1025 21:49:47.152316   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
E1025 21:50:07.632472   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
E1025 21:50:34.245053   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
E1025 21:50:34.250337   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
E1025 21:50:34.260611   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
E1025 21:50:34.280863   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
E1025 21:50:34.321126   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
E1025 21:50:34.401482   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
E1025 21:50:34.562179   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
E1025 21:50:34.882613   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
E1025 21:50:35.523476   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
E1025 21:50:36.803923   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
E1025 21:50:39.364355   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
E1025 21:50:44.484873   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
E1025 21:50:48.593015   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
E1025 21:50:54.725022   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
E1025 21:51:06.752523   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/calico-245646/client.crt: no such file or directory
E1025 21:51:06.757802   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/calico-245646/client.crt: no such file or directory
E1025 21:51:06.768041   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/calico-245646/client.crt: no such file or directory
E1025 21:51:06.788324   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/calico-245646/client.crt: no such file or directory
E1025 21:51:06.828690   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/calico-245646/client.crt: no such file or directory
E1025 21:51:06.908980   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/calico-245646/client.crt: no such file or directory
E1025 21:51:07.069421   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/calico-245646/client.crt: no such file or directory
E1025 21:51:07.390091   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/calico-245646/client.crt: no such file or directory
E1025 21:51:08.030545   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/calico-245646/client.crt: no such file or directory
E1025 21:51:09.311174   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/calico-245646/client.crt: no such file or directory
E1025 21:51:11.871668   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/calico-245646/client.crt: no such file or directory
E1025 21:51:11.968896   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/custom-flannel-245646/client.crt: no such file or directory
E1025 21:51:11.974152   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/custom-flannel-245646/client.crt: no such file or directory
E1025 21:51:11.984446   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/custom-flannel-245646/client.crt: no such file or directory
E1025 21:51:12.004726   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/custom-flannel-245646/client.crt: no such file or directory
E1025 21:51:12.045017   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/custom-flannel-245646/client.crt: no such file or directory
E1025 21:51:12.125356   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/custom-flannel-245646/client.crt: no such file or directory
E1025 21:51:12.285826   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/custom-flannel-245646/client.crt: no such file or directory
E1025 21:51:12.606309   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/custom-flannel-245646/client.crt: no such file or directory
E1025 21:51:13.247329   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/custom-flannel-245646/client.crt: no such file or directory
E1025 21:51:14.528102   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/custom-flannel-245646/client.crt: no such file or directory
E1025 21:51:15.206071   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
E1025 21:51:16.992494   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/calico-245646/client.crt: no such file or directory
E1025 21:51:17.088774   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/custom-flannel-245646/client.crt: no such file or directory
E1025 21:51:20.336354   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
E1025 21:51:20.341630   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
E1025 21:51:20.351873   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
E1025 21:51:20.372151   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
E1025 21:51:20.412572   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
E1025 21:51:20.492924   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
E1025 21:51:20.653333   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
E1025 21:51:20.962931   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
E1025 21:51:20.974390   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
E1025 21:51:21.615154   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
E1025 21:51:22.209652   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/custom-flannel-245646/client.crt: no such file or directory
E1025 21:51:22.896329   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
E1025 21:51:25.457423   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
E1025 21:51:27.233217   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/calico-245646/client.crt: no such file or directory
E1025 21:51:30.577566   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
E1025 21:51:32.449872   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/custom-flannel-245646/client.crt: no such file or directory
E1025 21:51:40.818413   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
E1025 21:51:47.714265   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/calico-245646/client.crt: no such file or directory
E1025 21:51:50.390129   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/ingress-addon-legacy-620621/client.crt: no such file or directory
E1025 21:51:52.931056   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/custom-flannel-245646/client.crt: no such file or directory
E1025 21:51:56.166698   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
E1025 21:52:01.299184   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
E1025 21:52:10.514002   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
E1025 21:52:12.925107   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/flannel-245646/client.crt: no such file or directory
E1025 21:52:12.930358   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/flannel-245646/client.crt: no such file or directory
E1025 21:52:12.940600   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/flannel-245646/client.crt: no such file or directory
E1025 21:52:12.960854   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/flannel-245646/client.crt: no such file or directory
E1025 21:52:13.001094   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/flannel-245646/client.crt: no such file or directory
E1025 21:52:13.081385   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/flannel-245646/client.crt: no such file or directory
E1025 21:52:13.241876   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/flannel-245646/client.crt: no such file or directory
E1025 21:52:13.563029   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/flannel-245646/client.crt: no such file or directory
E1025 21:52:14.204098   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/flannel-245646/client.crt: no such file or directory
E1025 21:52:15.484707   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/flannel-245646/client.crt: no such file or directory
E1025 21:52:18.045577   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/flannel-245646/client.crt: no such file or directory
E1025 21:52:23.166364   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/flannel-245646/client.crt: no such file or directory
E1025 21:52:24.889518   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/bridge-245646/client.crt: no such file or directory
E1025 21:52:24.894773   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/bridge-245646/client.crt: no such file or directory
E1025 21:52:24.905032   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/bridge-245646/client.crt: no such file or directory
E1025 21:52:24.925295   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/bridge-245646/client.crt: no such file or directory
E1025 21:52:24.965586   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/bridge-245646/client.crt: no such file or directory
E1025 21:52:25.045924   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/bridge-245646/client.crt: no such file or directory
E1025 21:52:25.207011   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/bridge-245646/client.crt: no such file or directory
E1025 21:52:25.528129   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/bridge-245646/client.crt: no such file or directory
E1025 21:52:26.168600   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/bridge-245646/client.crt: no such file or directory
E1025 21:52:27.449568   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/bridge-245646/client.crt: no such file or directory
E1025 21:52:28.674647   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/calico-245646/client.crt: no such file or directory
E1025 21:52:30.009917   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/bridge-245646/client.crt: no such file or directory
E1025 21:52:33.407374   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/flannel-245646/client.crt: no such file or directory
E1025 21:52:33.891530   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/custom-flannel-245646/client.crt: no such file or directory
E1025 21:52:35.130395   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/bridge-245646/client.crt: no such file or directory
E1025 21:52:42.260042   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
E1025 21:52:45.370592   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/bridge-245646/client.crt: no such file or directory
E1025 21:52:53.888505   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/flannel-245646/client.crt: no such file or directory
E1025 21:53:05.851232   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/bridge-245646/client.crt: no such file or directory
E1025 21:53:17.917979   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/addons-276457/client.crt: no such file or directory
E1025 21:53:18.087274   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
E1025 21:53:34.849142   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/flannel-245646/client.crt: no such file or directory
E1025 21:53:46.811757   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/bridge-245646/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-022347 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m42.597015105s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-022347 -n default-k8s-diff-port-022347
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (342.93s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vfxjq" [f638ab50-ed52-40ad-a085-27faa1969bf7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1025 21:53:50.594864   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/calico-245646/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vfxjq" [f638ab50-ed52-40ad-a085-27faa1969bf7] Running
E1025 21:53:55.812262   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/custom-flannel-245646/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.016616974s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.02s)
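Note: the "waiting 9m0s for pods matching …" lines above are the harness's own readiness poll; roughly the same check can be expressed as a single `kubectl wait` invocation. The sketch below only assembles and prints that command, since actually running it requires the live no-preload-621610 cluster from this report:

```shell
# Assemble the kubectl equivalent of the harness's 9-minute readiness poll
# (context, namespace, and selector taken from the test output above).
ctx=no-preload-621610
ns=kubernetes-dashboard
sel=k8s-app=kubernetes-dashboard
cmd="kubectl --context $ctx wait -n $ns --for=condition=ready pod -l $sel --timeout=9m"
echo "$cmd"
```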

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vfxjq" [f638ab50-ed52-40ad-a085-27faa1969bf7] Running
E1025 21:54:04.180188   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008272016s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-621610 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-621610 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/no-preload/serial/Pause (2.89s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-621610 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-621610 -n no-preload-621610
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-621610 -n no-preload-621610: exit status 2 (336.560654ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-621610 -n no-preload-621610
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-621610 -n no-preload-621610: exit status 2 (333.996471ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-621610 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-621610 -n no-preload-621610
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-621610 -n no-preload-621610
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.89s)
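The `--format={{.APIServer}}` / `--format={{.Kubelet}}` flags used throughout this sequence are Go text/template expressions evaluated against minikube's status value. A minimal sketch of that mechanism, assuming a hypothetical `Status` type whose field names come from the flags above rather than from minikube's source:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Status is a hypothetical stand-in for the value minikube renders with
// --format; only the fields referenced by the flags in this log are declared.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

// render applies a Go text/template (the mechanism behind --format)
// to a Status value and returns the rendered output.
func render(format string, s Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	if err := tmpl.Execute(&b, s); err != nil {
		return "", err
	}
	return b.String(), nil
}

func main() {
	// After `pause`, the log shows the API server as Paused while the
	// kubelet reports Stopped.
	s := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
	out, err := render("{{.APIServer}}", s)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // prints "Paused"
}
```

This is why the harness records `status error: exit status 2 (may be ok)`: a paused cluster deliberately reports non-running components, so a non-zero status exit is expected here rather than a failure.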

TestStartStop/group/newest-cni/serial/FirstStart (35.51s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-488509 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1025 21:54:26.670896   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-488509 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (35.508316578s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.51s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pggwj" [eef7d4c9-b70e-4a3d-92e6-1ae9573963ce] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pggwj" [eef7d4c9-b70e-4a3d-92e6-1ae9573963ce] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.018955952s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pggwj" [eef7d4c9-b70e-4a3d-92e6-1ae9573963ce] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008939322s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-817068 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-817068 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/embed-certs/serial/Pause (3.38s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-817068 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-817068 -n embed-certs-817068
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-817068 -n embed-certs-817068: exit status 2 (373.443147ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-817068 -n embed-certs-817068
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-817068 -n embed-certs-817068: exit status 2 (407.101166ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-817068 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-817068 -n embed-certs-817068
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-817068 -n embed-certs-817068
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.38s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-488509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-488509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.206970076s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.21s)
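The `--images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain` pair above points the metrics-server addon at an override image and registry. A hedged sketch of the registry rewrite involved; the heuristic for detecting an explicit registry (first path segment containing `.` or `:`, or `localhost`) is an assumption, not minikube's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// overrideRegistry swaps the registry portion of an image reference, in the
// spirit of the --registries flag above:
// registry.k8s.io/echoserver:1.4 + fake.domain -> fake.domain/echoserver:1.4.
// The registry-detection rule here is a hypothetical approximation.
func overrideRegistry(ref, registry string) string {
	parts := strings.SplitN(ref, "/", 2)
	if len(parts) == 2 && (strings.ContainsAny(parts[0], ".:") || parts[0] == "localhost") {
		// ref carries an explicit registry; replace it.
		return registry + "/" + parts[1]
	}
	// No explicit registry in ref; prepend the override.
	return registry + "/" + ref
}

func main() {
	fmt.Println(overrideRegistry("registry.k8s.io/echoserver:1.4", "fake.domain"))
	// prints "fake.domain/echoserver:1.4"
}
```

Pointing the image at `fake.domain` guarantees the pull can never succeed, which lets the test verify addon wiring without depending on a real registry.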

TestStartStop/group/newest-cni/serial/Stop (1.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-488509 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-488509 --alsologtostderr -v=3: (1.278781815s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-488509 -n newest-cni-488509
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-488509 -n newest-cni-488509: exit status 7 (78.624822ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-488509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (25.94s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-488509 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1025 21:54:54.354567   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/auto-245646/client.crt: no such file or directory
E1025 21:54:56.769400   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/flannel-245646/client.crt: no such file or directory
E1025 21:55:08.732322   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/bridge-245646/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-488509 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (25.597711856s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-488509 -n newest-cni-488509
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.94s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-488509 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)
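The VerifyKubernetesImages steps shell into the node, dump `sudo crictl images -o json`, and report tags outside the expected set as "Found non-minikube image". A sketch of that kind of filtering over crictl's JSON shape (an `images` array whose entries carry `repoTags`); the one-registry allow-list below is an assumption for illustration, not the test's actual list:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// imageList mirrors only the fields of `crictl images -o json` output
// that this sketch needs.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// nonMinikube returns repo tags that fall outside the assumed allow-list
// (registry.k8s.io only), similar in spirit to the log lines above.
func nonMinikube(raw []byte) ([]string, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, err
	}
	var out []string
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				out = append(out, tag)
			}
		}
	}
	return out, nil
}

func main() {
	// Hypothetical sample in the crictl JSON shape.
	sample := []byte(`{"images":[
	  {"repoTags":["registry.k8s.io/pause:3.9"]},
	  {"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"]}]}`)
	tags, err := nonMinikube(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(tags)
}
```

Note that the reported tags are informational here: finding extra images (kindnetd, busybox) does not fail the step, it only records them.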

TestStartStop/group/newest-cni/serial/Pause (3.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-488509 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-488509 -n newest-cni-488509
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-488509 -n newest-cni-488509: exit status 2 (338.660846ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-488509 -n newest-cni-488509
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-488509 -n newest-cni-488509: exit status 2 (325.831362ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-488509 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-488509 -n newest-cni-488509
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-488509 -n newest-cni-488509
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.06s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6lwgq" [39e9f44e-b9a0-4e64-bcf0-a4627b0b78eb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6lwgq" [39e9f44e-b9a0-4e64-bcf0-a4627b0b78eb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.015558332s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6lwgq" [39e9f44e-b9a0-4e64-bcf0-a4627b0b78eb] Running
E1025 21:55:34.245257   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/kindnet-245646/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008759121s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-022347 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-022347 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-022347 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-022347 -n default-k8s-diff-port-022347
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-022347 -n default-k8s-diff-port-022347: exit status 2 (284.869283ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-022347 -n default-k8s-diff-port-022347
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-022347 -n default-k8s-diff-port-022347: exit status 2 (294.442282ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-022347 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-022347 -n default-k8s-diff-port-022347
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-022347 -n default-k8s-diff-port-022347
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.58s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7hvrm" [393d938a-3e11-4180-a75d-cbfbc662e283] Running
E1025 21:56:20.336156   18323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/enable-default-cni-245646/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013704302s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7hvrm" [393d938a-3e11-4180-a75d-cbfbc662e283] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00801792s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-499881 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-499881 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/old-k8s-version/serial/Pause (2.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-499881 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-499881 -n old-k8s-version-499881
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-499881 -n old-k8s-version-499881: exit status 2 (281.746554ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-499881 -n old-k8s-version-499881
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-499881 -n old-k8s-version-499881: exit status 2 (284.820828ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-499881 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-499881 -n old-k8s-version-499881
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-499881 -n old-k8s-version-499881
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.56s)

Test skip (24/308)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.78s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-245646 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-245646

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-245646

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-245646

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-245646

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-245646

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-245646

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-245646

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-245646

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-245646

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-245646

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: /etc/hosts:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: /etc/resolv.conf:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-245646

>>> host: crictl pods:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: crictl containers:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> k8s: describe netcat deployment:
error: context "kubenet-245646" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-245646" does not exist

>>> k8s: netcat logs:
error: context "kubenet-245646" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-245646" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-245646" does not exist

>>> k8s: coredns logs:
error: context "kubenet-245646" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-245646" does not exist

>>> k8s: api server logs:
error: context "kubenet-245646" does not exist

>>> host: /etc/cni:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: ip a s:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: ip r s:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: iptables-save:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: iptables table nat:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-245646" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-245646" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-245646" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: kubelet daemon config:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> k8s: kubelet logs:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt
    server: https://127.0.0.1:32928
  name: missing-upgrade-392851
contexts:
- context:
    cluster: missing-upgrade-392851
    user: missing-upgrade-392851
  name: missing-upgrade-392851
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-392851
  user:
    client-certificate: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/missing-upgrade-392851/client.crt
    client-key: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/missing-upgrade-392851/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-245646

>>> host: docker daemon status:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: docker daemon config:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: docker system info:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: cri-docker daemon status:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: cri-docker daemon config:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: cri-dockerd version:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: containerd daemon status:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: containerd daemon config:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: containerd config dump:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: crio daemon status:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: crio daemon config:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: /etc/crio:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"

>>> host: crio config:
* Profile "kubenet-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245646"
----------------------- debugLogs end: kubenet-245646 [took: 3.586978705s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-245646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-245646
--- SKIP: TestNetworkPlugins/group/kubenet (3.78s)

TestNetworkPlugins/group/cilium (3.81s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-245646 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-245646

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-245646

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-245646

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-245646

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-245646

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-245646

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-245646

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-245646

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-245646

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-245646

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-245646

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: crictl containers:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> k8s: describe netcat deployment:
error: context "cilium-245646" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-245646" does not exist
>>> k8s: netcat logs:
error: context "cilium-245646" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-245646" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-245646" does not exist
>>> k8s: coredns logs:
error: context "cilium-245646" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-245646" does not exist
>>> k8s: api server logs:
error: context "cilium-245646" does not exist
>>> host: /etc/cni:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: ip a s:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: ip r s:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: iptables-save:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: iptables table nat:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-245646
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-245646
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-245646" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-245646" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-245646
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-245646
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-245646" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-245646" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-245646" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-245646" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-245646" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: kubelet daemon config:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> k8s: kubelet logs:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 25 Oct 2023 21:42:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: cert-expiration-909981
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17488-11542/.minikube/ca.crt
    server: https://127.0.0.1:32928
  name: missing-upgrade-392851
contexts:
- context:
    cluster: cert-expiration-909981
    extensions:
    - extension:
        last-update: Wed, 25 Oct 2023 21:42:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-909981
  name: cert-expiration-909981
- context:
    cluster: missing-upgrade-392851
    user: missing-upgrade-392851
  name: missing-upgrade-392851
current-context: cert-expiration-909981
kind: Config
preferences: {}
users:
- name: cert-expiration-909981
  user:
    client-certificate: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/cert-expiration-909981/client.crt
    client-key: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/cert-expiration-909981/client.key
- name: missing-upgrade-392851
  user:
    client-certificate: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/missing-upgrade-392851/client.crt
    client-key: /home/jenkins/minikube-integration/17488-11542/.minikube/profiles/missing-upgrade-392851/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-245646
>>> host: docker daemon status:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: docker daemon config:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: docker system info:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: cri-docker daemon status:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: cri-docker daemon config:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: cri-dockerd version:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: containerd daemon status:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: containerd daemon config:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: containerd config dump:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: crio daemon status:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: crio daemon config:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: /etc/crio:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
>>> host: crio config:
* Profile "cilium-245646" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245646"
----------------------- debugLogs end: cilium-245646 [took: 3.634365233s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-245646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-245646
--- SKIP: TestNetworkPlugins/group/cilium (3.81s)
TestStartStop/group/disable-driver-mounts (0.22s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-013885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-013885
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)