Test Report: Docker_Linux_crio 17857

6e3ba89264b64b7b6259573ef051dd85e83461cf:2023-12-26:32448

Failed tests (6/316)

| Order | Failed test                                            | Duration (s) |
|-------|--------------------------------------------------------|--------------|
| 35    | TestAddons/parallel/Ingress                            | 151.49       |
| 127   | TestFunctional/parallel/ImageCommands/ImageLoadDaemon  | 10.04        |
| 167   | TestIngressAddonLegacy/serial/ValidateIngressAddons    | 184.72       |
| 217   | TestMultiNode/serial/PingHostFrom2Pods                 | 3.03         |
| 239   | TestRunningBinaryUpgrade                               | 94.34        |
| 265   | TestStoppedBinaryUpgrade/Upgrade                       | 65.89        |
TestAddons/parallel/Ingress (151.49s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-989445 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-989445 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-989445 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [79fe629c-9e10-4151-b789-c03be89af537] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [79fe629c-9e10-4151-b789-c03be89af537] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.005309717s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-989445 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-989445 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.283307143s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
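
Note: the status 28 propagated through ssh above is curl's operation-timeout exit code, so the request was issued but the ingress controller never answered within the allotted window. A minimal sketch for re-running the probe by hand against a still-live profile (profile name taken from this log; the --max-time value is an arbitrary choice, not the harness's timeout):

	out/minikube-linux-amd64 -p addons-989445 ssh \
	  "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"

The -v output distinguishes a connection that opens and then stalls from one that is refused outright.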
addons_test.go:286: (dbg) Run:  kubectl --context addons-989445 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-989445 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-989445 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-989445 addons disable ingress-dns --alsologtostderr -v=1: (1.395761285s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-989445 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-989445 addons disable ingress --alsologtostderr -v=1: (7.630833098s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-989445
helpers_test.go:235: (dbg) docker inspect addons-989445:

-- stdout --
	[
	    {
	        "Id": "5a299fb3436b49784279acf8660926c349906abab9a80a64bc80ce47a0e77806",
	        "Created": "2023-12-26T21:45:17.524948926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 15720,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-26T21:45:17.899305279Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/5a299fb3436b49784279acf8660926c349906abab9a80a64bc80ce47a0e77806/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5a299fb3436b49784279acf8660926c349906abab9a80a64bc80ce47a0e77806/hostname",
	        "HostsPath": "/var/lib/docker/containers/5a299fb3436b49784279acf8660926c349906abab9a80a64bc80ce47a0e77806/hosts",
	        "LogPath": "/var/lib/docker/containers/5a299fb3436b49784279acf8660926c349906abab9a80a64bc80ce47a0e77806/5a299fb3436b49784279acf8660926c349906abab9a80a64bc80ce47a0e77806-json.log",
	        "Name": "/addons-989445",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-989445:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-989445",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e17b3e81117eae22d79a0948715fdb14ed4ed051b6dff7ec5e927d2c244c6c78-init/diff:/var/lib/docker/overlay2/9309fabaee2d1c218955e7e97c12621fc2771807097b157c41ecafdb1f7c4f26/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e17b3e81117eae22d79a0948715fdb14ed4ed051b6dff7ec5e927d2c244c6c78/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e17b3e81117eae22d79a0948715fdb14ed4ed051b6dff7ec5e927d2c244c6c78/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e17b3e81117eae22d79a0948715fdb14ed4ed051b6dff7ec5e927d2c244c6c78/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-989445",
	                "Source": "/var/lib/docker/volumes/addons-989445/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-989445",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-989445",
	                "name.minikube.sigs.k8s.io": "addons-989445",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "87447dbbc0c0f1d58bd833e4bb82bc831f2edeeb01f939e1af2564cf5eaed030",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/87447dbbc0c0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-989445": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5a299fb3436b",
	                        "addons-989445"
	                    ],
	                    "NetworkID": "8ec53727658e7f4557bc01c2564f82f1448ebc5c2a6913f8e9432ba7e94da2ce",
	                    "EndpointID": "edbdcefccaed4a2aabe4c3209d1eb3c2bd3a1ce65dfc5bfc64bb980db9cf2e2f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
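
Note: in the NetworkSettings.Ports block above, each container port is bound to an ephemeral host port on 127.0.0.1 (22/tcp maps to 32772 in this run). A sketch for pulling a single mapping out of docker inspect, reusing the same Go template the harness itself runs later in this log:

	docker container inspect addons-989445 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'

Substituting "8443/tcp" for "22/tcp" returns the API server mapping (32769 here).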
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-989445 -n addons-989445
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-989445 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-989445 logs -n 25: (1.242233017s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-551448                                                                     | download-only-551448   | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC | 26 Dec 23 21:44 UTC |
	| delete  | -p download-only-551448                                                                     | download-only-551448   | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC | 26 Dec 23 21:44 UTC |
	| start   | --download-only -p                                                                          | download-docker-088475 | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |                     |
	|         | download-docker-088475                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-088475                                                                   | download-docker-088475 | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC | 26 Dec 23 21:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-515766   | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |                     |
	|         | binary-mirror-515766                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38627                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-515766                                                                     | binary-mirror-515766   | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC | 26 Dec 23 21:44 UTC |
	| addons  | enable dashboard -p                                                                         | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |                     |
	|         | addons-989445                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |                     |
	|         | addons-989445                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-989445 --wait=true                                                                | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC | 26 Dec 23 21:47 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-989445 addons                                                                        | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:47 UTC | 26 Dec 23 21:47 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-989445 ssh cat                                                                       | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:47 UTC | 26 Dec 23 21:47 UTC |
	|         | /opt/local-path-provisioner/pvc-52463b29-232e-44d1-8a86-3781624e9cfa_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-989445 addons disable                                                                | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:47 UTC | 26 Dec 23 21:47 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:47 UTC | 26 Dec 23 21:47 UTC |
	|         | -p addons-989445                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:47 UTC | 26 Dec 23 21:47 UTC |
	|         | addons-989445                                                                               |                        |         |         |                     |                     |
	| ip      | addons-989445 ip                                                                            | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:47 UTC | 26 Dec 23 21:47 UTC |
	| addons  | addons-989445 addons disable                                                                | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:47 UTC | 26 Dec 23 21:47 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:47 UTC | 26 Dec 23 21:47 UTC |
	|         | addons-989445                                                                               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:47 UTC | 26 Dec 23 21:47 UTC |
	|         | -p addons-989445                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-989445 ssh curl -s                                                                   | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-989445 addons disable                                                                | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:47 UTC | 26 Dec 23 21:47 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-989445 addons                                                                        | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-989445 addons                                                                        | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-989445 ip                                                                            | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:49 UTC | 26 Dec 23 21:49 UTC |
	| addons  | addons-989445 addons disable                                                                | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:49 UTC | 26 Dec 23 21:49 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-989445 addons disable                                                                | addons-989445          | jenkins | v1.32.0 | 26 Dec 23 21:49 UTC | 26 Dec 23 21:50 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 21:44:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 21:44:54.268223   15036 out.go:296] Setting OutFile to fd 1 ...
	I1226 21:44:54.268556   15036 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:44:54.268567   15036 out.go:309] Setting ErrFile to fd 2...
	I1226 21:44:54.268573   15036 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:44:54.268856   15036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
	I1226 21:44:54.269717   15036 out.go:303] Setting JSON to false
	I1226 21:44:54.270863   15036 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1645,"bootTime":1703625450,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1226 21:44:54.270953   15036 start.go:138] virtualization: kvm guest
	I1226 21:44:54.273231   15036 out.go:177] * [addons-989445] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1226 21:44:54.274932   15036 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 21:44:54.275007   15036 notify.go:220] Checking for updates...
	I1226 21:44:54.276229   15036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 21:44:54.277612   15036 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 21:44:54.278726   15036 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	I1226 21:44:54.279928   15036 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1226 21:44:54.281090   15036 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 21:44:54.282519   15036 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 21:44:54.307956   15036 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 21:44:54.308134   15036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:44:54.366281   15036 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-26 21:44:54.357380998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 21:44:54.366375   15036 docker.go:295] overlay module found
	I1226 21:44:54.368176   15036 out.go:177] * Using the docker driver based on user configuration
	I1226 21:44:54.369237   15036 start.go:298] selected driver: docker
	I1226 21:44:54.369248   15036 start.go:902] validating driver "docker" against <nil>
	I1226 21:44:54.369266   15036 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 21:44:54.369978   15036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:44:54.422858   15036 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-26 21:44:54.415482622 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 21:44:54.423052   15036 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 21:44:54.423311   15036 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 21:44:54.424983   15036 out.go:177] * Using Docker driver with root privileges
	I1226 21:44:54.426298   15036 cni.go:84] Creating CNI manager for ""
	I1226 21:44:54.426319   15036 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:44:54.426331   15036 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1226 21:44:54.426342   15036 start_flags.go:323] config:
	{Name:addons-989445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-989445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:44:54.427802   15036 out.go:177] * Starting control plane node addons-989445 in cluster addons-989445
	I1226 21:44:54.429060   15036 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 21:44:54.430364   15036 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 21:44:54.431527   15036 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 21:44:54.431554   15036 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 21:44:54.431570   15036 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1226 21:44:54.431578   15036 cache.go:56] Caching tarball of preloaded images
	I1226 21:44:54.431661   15036 preload.go:174] Found /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1226 21:44:54.431676   15036 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1226 21:44:54.432004   15036 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/config.json ...
	I1226 21:44:54.432028   15036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/config.json: {Name:mk0d646c8e8dc47be53c7f189aab01fe442663f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:44:54.447643   15036 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I1226 21:44:54.447763   15036 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I1226 21:44:54.447780   15036 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I1226 21:44:54.447785   15036 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I1226 21:44:54.447796   15036 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I1226 21:44:54.447806   15036 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c from local cache
	I1226 21:45:07.822810   15036 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c from cached tarball
	I1226 21:45:07.822868   15036 cache.go:194] Successfully downloaded all kic artifacts
	I1226 21:45:07.822931   15036 start.go:365] acquiring machines lock for addons-989445: {Name:mk899f6f261b2c3445a81bc6460b43ef9d36e2d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 21:45:07.823054   15036 start.go:369] acquired machines lock for "addons-989445" in 99.955µs
	I1226 21:45:07.823090   15036 start.go:93] Provisioning new machine with config: &{Name:addons-989445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-989445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 21:45:07.823219   15036 start.go:125] createHost starting for "" (driver="docker")
	I1226 21:45:07.903710   15036 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1226 21:45:07.904126   15036 start.go:159] libmachine.API.Create for "addons-989445" (driver="docker")
	I1226 21:45:07.904179   15036 client.go:168] LocalClient.Create starting
	I1226 21:45:07.904341   15036 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem
	I1226 21:45:07.988394   15036 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem
	I1226 21:45:08.221229   15036 cli_runner.go:164] Run: docker network inspect addons-989445 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 21:45:08.240732   15036 cli_runner.go:211] docker network inspect addons-989445 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 21:45:08.240845   15036 network_create.go:281] running [docker network inspect addons-989445] to gather additional debugging logs...
	I1226 21:45:08.240872   15036 cli_runner.go:164] Run: docker network inspect addons-989445
	W1226 21:45:08.258055   15036 cli_runner.go:211] docker network inspect addons-989445 returned with exit code 1
	I1226 21:45:08.258083   15036 network_create.go:284] error running [docker network inspect addons-989445]: docker network inspect addons-989445: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-989445 not found
	I1226 21:45:08.258094   15036 network_create.go:286] output of [docker network inspect addons-989445]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-989445 not found
	
	** /stderr **
	I1226 21:45:08.258199   15036 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 21:45:08.275761   15036 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000b6f610}
	I1226 21:45:08.275801   15036 network_create.go:124] attempt to create docker network addons-989445 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1226 21:45:08.275846   15036 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-989445 addons-989445
	I1226 21:45:08.467352   15036 network_create.go:108] docker network addons-989445 192.168.49.0/24 created
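
Note: minikube picked 192.168.49.0/24 as the first free private subnet and created a dedicated bridge network for the profile. A quick sketch to confirm the subnet and gateway of the resulting network (same Go-template style as the inspect calls elsewhere in this log):

	docker network inspect addons-989445 \
	  -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'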
	I1226 21:45:08.467399   15036 kic.go:121] calculated static IP "192.168.49.2" for the "addons-989445" container
	I1226 21:45:08.467513   15036 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 21:45:08.485143   15036 cli_runner.go:164] Run: docker volume create addons-989445 --label name.minikube.sigs.k8s.io=addons-989445 --label created_by.minikube.sigs.k8s.io=true
	I1226 21:45:08.523813   15036 oci.go:103] Successfully created a docker volume addons-989445
	I1226 21:45:08.523976   15036 cli_runner.go:164] Run: docker run --rm --name addons-989445-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-989445 --entrypoint /usr/bin/test -v addons-989445:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 21:45:12.083757   15036 cli_runner.go:217] Completed: docker run --rm --name addons-989445-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-989445 --entrypoint /usr/bin/test -v addons-989445:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib: (3.559741771s)
	I1226 21:45:12.083780   15036 oci.go:107] Successfully prepared a docker volume addons-989445
	I1226 21:45:12.083802   15036 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 21:45:12.083818   15036 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 21:45:12.083871   15036 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-989445:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I1226 21:45:17.456457   15036 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-989445:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (5.372527234s)
	I1226 21:45:17.456488   15036 kic.go:203] duration metric: took 5.372667 seconds to extract preloaded images to volume
	W1226 21:45:17.456678   15036 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1226 21:45:17.456801   15036 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1226 21:45:17.508294   15036 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-989445 --name addons-989445 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-989445 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-989445 --network addons-989445 --ip 192.168.49.2 --volume addons-989445:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
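
Note: each --publish=127.0.0.1::PORT flag above uses Docker's ip::containerPort form, leaving the host port empty so the daemon assigns an ephemeral one; that is why HostConfig.PortBindings in the earlier inspect output shows HostPort "" while NetworkSettings.Ports shows the assigned values (32768-32772). A minimal sketch of the same pattern, with nginx standing in purely as a placeholder image:

	docker run -d --name demo --publish=127.0.0.1::80 nginx
	docker port demo 80    # prints the assigned mapping, e.g. 127.0.0.1:32768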
	I1226 21:45:17.908800   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Running}}
	I1226 21:45:17.926483   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:17.945144   15036 cli_runner.go:164] Run: docker exec addons-989445 stat /var/lib/dpkg/alternatives/iptables
	I1226 21:45:18.025945   15036 oci.go:144] the created container "addons-989445" has a running status.
	I1226 21:45:18.025991   15036 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa...
	I1226 21:45:18.589619   15036 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1226 21:45:18.615688   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:18.634029   15036 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1226 21:45:18.634045   15036 kic_runner.go:114] Args: [docker exec --privileged addons-989445 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1226 21:45:18.688871   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:18.708052   15036 machine.go:88] provisioning docker machine ...
	I1226 21:45:18.708094   15036 ubuntu.go:169] provisioning hostname "addons-989445"
	I1226 21:45:18.708172   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:18.726177   15036 main.go:141] libmachine: Using SSH client type: native
	I1226 21:45:18.726576   15036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1226 21:45:18.726596   15036 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-989445 && echo "addons-989445" | sudo tee /etc/hostname
	I1226 21:45:18.862043   15036 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-989445
	
	I1226 21:45:18.862126   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:18.880612   15036 main.go:141] libmachine: Using SSH client type: native
	I1226 21:45:18.880959   15036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1226 21:45:18.880989   15036 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-989445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-989445/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-989445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 21:45:19.002982   15036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 21:45:19.003011   15036 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-7214/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-7214/.minikube}
	I1226 21:45:19.003054   15036 ubuntu.go:177] setting up certificates
	I1226 21:45:19.003068   15036 provision.go:83] configureAuth start
	I1226 21:45:19.003128   15036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-989445
	I1226 21:45:19.018923   15036 provision.go:138] copyHostCerts
	I1226 21:45:19.019005   15036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem (1082 bytes)
	I1226 21:45:19.019134   15036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem (1123 bytes)
	I1226 21:45:19.019227   15036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem (1679 bytes)
	I1226 21:45:19.019309   15036 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca-key.pem org=jenkins.addons-989445 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-989445]
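	The san list above is baked into the generated server certificate; if needed it can be verified with a standard openssl invocation (OpenSSL 1.1.1+, not part of this run):
	  $ openssl x509 -noout -ext subjectAltName \
	      -in /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem
	which prints the X509v3 Subject Alternative Name extension carrying the DNS names and IPs listed above.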
	I1226 21:45:19.163211   15036 provision.go:172] copyRemoteCerts
	I1226 21:45:19.163275   15036 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 21:45:19.163316   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:19.183757   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:19.276281   15036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1226 21:45:19.300287   15036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 21:45:19.325023   15036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1226 21:45:19.346079   15036 provision.go:86] duration metric: configureAuth took 342.991766ms
	I1226 21:45:19.346123   15036 ubuntu.go:193] setting minikube options for container-runtime
	I1226 21:45:19.346290   15036 config.go:182] Loaded profile config "addons-989445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 21:45:19.346427   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:19.364197   15036 main.go:141] libmachine: Using SSH client type: native
	I1226 21:45:19.364591   15036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1226 21:45:19.364619   15036 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 21:45:19.585405   15036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 21:45:19.585432   15036 machine.go:91] provisioned docker machine in 877.350561ms
	I1226 21:45:19.585442   15036 client.go:171] LocalClient.Create took 11.681254406s
	I1226 21:45:19.585457   15036 start.go:167] duration metric: libmachine.API.Create for "addons-989445" took 11.681337711s
	I1226 21:45:19.585465   15036 start.go:300] post-start starting for "addons-989445" (driver="docker")
	I1226 21:45:19.585476   15036 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 21:45:19.585519   15036 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 21:45:19.585556   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:19.606150   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:19.695784   15036 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 21:45:19.699095   15036 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 21:45:19.699136   15036 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 21:45:19.699147   15036 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 21:45:19.699153   15036 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1226 21:45:19.699163   15036 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-7214/.minikube/addons for local assets ...
	I1226 21:45:19.699224   15036 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-7214/.minikube/files for local assets ...
	I1226 21:45:19.699244   15036 start.go:303] post-start completed in 113.772263ms
	I1226 21:45:19.699510   15036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-989445
	I1226 21:45:19.717723   15036 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/config.json ...
	I1226 21:45:19.718073   15036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 21:45:19.718127   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:19.737389   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:19.827768   15036 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 21:45:19.831776   15036 start.go:128] duration metric: createHost completed in 12.008535838s
	I1226 21:45:19.831806   15036 start.go:83] releasing machines lock for "addons-989445", held for 12.00873599s
	I1226 21:45:19.831889   15036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-989445
	I1226 21:45:19.850479   15036 ssh_runner.go:195] Run: cat /version.json
	I1226 21:45:19.850507   15036 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 21:45:19.850561   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:19.850568   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:19.871112   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:19.871852   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:20.048920   15036 ssh_runner.go:195] Run: systemctl --version
	I1226 21:45:20.053104   15036 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1226 21:45:20.190796   15036 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 21:45:20.195262   15036 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 21:45:20.215525   15036 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1226 21:45:20.215644   15036 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 21:45:20.244451   15036 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
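	The find invocations above are logged with their shell quoting stripped; an equivalently quoted sketch of the disable step, which renames matching configs rather than deleting them, is:
	  $ sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;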
	I1226 21:45:20.244479   15036 start.go:475] detecting cgroup driver to use...
	I1226 21:45:20.244508   15036 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 21:45:20.244560   15036 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 21:45:20.261192   15036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 21:45:20.271971   15036 docker.go:203] disabling cri-docker service (if available) ...
	I1226 21:45:20.272046   15036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1226 21:45:20.285136   15036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1226 21:45:20.298637   15036 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1226 21:45:20.372133   15036 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1226 21:45:20.452949   15036 docker.go:219] disabling docker service ...
	I1226 21:45:20.453040   15036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1226 21:45:20.473546   15036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1226 21:45:20.485696   15036 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1226 21:45:20.559529   15036 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1226 21:45:20.640414   15036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1226 21:45:20.650507   15036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 21:45:20.665250   15036 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1226 21:45:20.665303   15036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 21:45:20.674350   15036 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1226 21:45:20.674413   15036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 21:45:20.685256   15036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 21:45:20.695608   15036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
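	After the three sed edits above, the drop-in should carry exactly the rewritten keys; a quick check (the expected values follow directly from the sed expressions) is:
	  $ grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  pause_image = "registry.k8s.io/pause:3.9"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"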
	I1226 21:45:20.706833   15036 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 21:45:20.716677   15036 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 21:45:20.724651   15036 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 21:45:20.732274   15036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 21:45:20.808905   15036 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1226 21:45:20.913618   15036 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1226 21:45:20.913718   15036 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1226 21:45:20.917629   15036 start.go:543] Will wait 60s for crictl version
	I1226 21:45:20.917683   15036 ssh_runner.go:195] Run: which crictl
	I1226 21:45:20.921463   15036 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 21:45:20.957329   15036 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1226 21:45:20.957502   15036 ssh_runner.go:195] Run: crio --version
	I1226 21:45:20.993436   15036 ssh_runner.go:195] Run: crio --version
	I1226 21:45:21.031192   15036 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1226 21:45:21.032656   15036 cli_runner.go:164] Run: docker network inspect addons-989445 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 21:45:21.049691   15036 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1226 21:45:21.053733   15036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
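	The one-liner above is an idempotent hosts-file update: strip any stale entry for the name, append the current mapping, then copy the temp file back into place. Spelled out with the same pattern as a sketch:
	  $ name=host.minikube.internal ip=192.168.49.1
	  $ { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	  $ sudo cp /tmp/h.$$ /etc/hosts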
	I1226 21:45:21.063553   15036 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 21:45:21.063626   15036 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 21:45:21.121461   15036 crio.go:496] all images are preloaded for cri-o runtime.
	I1226 21:45:21.121482   15036 crio.go:415] Images already preloaded, skipping extraction
	I1226 21:45:21.121520   15036 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 21:45:21.154833   15036 crio.go:496] all images are preloaded for cri-o runtime.
	I1226 21:45:21.154863   15036 cache_images.go:84] Images are preloaded, skipping loading
	I1226 21:45:21.154951   15036 ssh_runner.go:195] Run: crio config
	I1226 21:45:21.199791   15036 cni.go:84] Creating CNI manager for ""
	I1226 21:45:21.199811   15036 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:45:21.199828   15036 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 21:45:21.199846   15036 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-989445 NodeName:addons-989445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1226 21:45:21.199964   15036 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-989445"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
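	On kubeadm v1.26 and newer, a rendered config like the one above can be sanity-checked before init; a possible invocation against the file staged under /var/tmp/minikube below would be:
	  $ sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml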
	
	I1226 21:45:21.200022   15036 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-989445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-989445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1226 21:45:21.200085   15036 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1226 21:45:21.208638   15036 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 21:45:21.208721   15036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1226 21:45:21.216617   15036 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1226 21:45:21.232679   15036 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1226 21:45:21.250221   15036 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
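	With the kubelet unit, the 10-kubeadm.conf drop-in, and the kubeadm config staged above, systemd will present the merged kubelet definition; a standard check, not part of this run, is:
	  $ systemctl cat kubelet
	which prints /lib/systemd/system/kubelet.service followed by /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, each preceded by a path comment.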
	I1226 21:45:21.268029   15036 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1226 21:45:21.271490   15036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 21:45:21.281515   15036 certs.go:56] Setting up /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445 for IP: 192.168.49.2
	I1226 21:45:21.281543   15036 certs.go:190] acquiring lock for shared ca certs: {Name:mk3336638bd66053c32b2c7f6f2d1c6a563fd761 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:21.281660   15036 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.key
	I1226 21:45:21.331019   15036 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt ...
	I1226 21:45:21.331045   15036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt: {Name:mk400ee3b8e57527e98f38cba470dab89ef8541d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:21.331227   15036 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-7214/.minikube/ca.key ...
	I1226 21:45:21.331242   15036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/ca.key: {Name:mk657253c86de67f6827ffdfd7bcb9e3fd2d4e7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:21.331341   15036 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.key
	I1226 21:45:21.466120   15036 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.crt ...
	I1226 21:45:21.466147   15036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.crt: {Name:mk8b3de3c74eb9d8221753b8a6170eea7c48d961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:21.466306   15036 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.key ...
	I1226 21:45:21.466316   15036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.key: {Name:mkf765e711b3bace9c525558839dad77d5fd51b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:21.466418   15036 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.key
	I1226 21:45:21.466431   15036 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt with IP's: []
	I1226 21:45:21.605205   15036 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt ...
	I1226 21:45:21.605251   15036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: {Name:mk7b377d02ba3c8478f62d5ff61ecb34aa29a3ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:21.605490   15036 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.key ...
	I1226 21:45:21.605501   15036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.key: {Name:mkbe54afa8831e5972550421d3324380ac426afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:21.605574   15036 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/apiserver.key.dd3b5fb2
	I1226 21:45:21.605590   15036 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1226 21:45:21.784616   15036 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/apiserver.crt.dd3b5fb2 ...
	I1226 21:45:21.784664   15036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/apiserver.crt.dd3b5fb2: {Name:mk9034a62645616f8c7515f6526350a6ebe353c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:21.784908   15036 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/apiserver.key.dd3b5fb2 ...
	I1226 21:45:21.784925   15036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/apiserver.key.dd3b5fb2: {Name:mk708c992b2325f87f92b4514e3dcf4af88e0a79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:21.785001   15036 certs.go:337] copying /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/apiserver.crt
	I1226 21:45:21.785069   15036 certs.go:341] copying /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/apiserver.key
	I1226 21:45:21.785111   15036 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/proxy-client.key
	I1226 21:45:21.785125   15036 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/proxy-client.crt with IP's: []
	I1226 21:45:22.120207   15036 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/proxy-client.crt ...
	I1226 21:45:22.120259   15036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/proxy-client.crt: {Name:mk05a3885b1d59e6c4ff82a71fe1d6ce8911fd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:22.120507   15036 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/proxy-client.key ...
	I1226 21:45:22.120521   15036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/proxy-client.key: {Name:mk2e9d2a8e579105407ec1e8dac099c8fb5ea206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:22.120743   15036 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca-key.pem (1679 bytes)
	I1226 21:45:22.120785   15036 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem (1082 bytes)
	I1226 21:45:22.120828   15036 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem (1123 bytes)
	I1226 21:45:22.120862   15036 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem (1679 bytes)
	I1226 21:45:22.121558   15036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1226 21:45:22.144935   15036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1226 21:45:22.169543   15036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1226 21:45:22.192984   15036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1226 21:45:22.216202   15036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 21:45:22.239351   15036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 21:45:22.261081   15036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 21:45:22.285349   15036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1226 21:45:22.309754   15036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 21:45:22.333536   15036 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1226 21:45:22.351484   15036 ssh_runner.go:195] Run: openssl version
	I1226 21:45:22.356718   15036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 21:45:22.365583   15036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 21:45:22.368716   15036 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I1226 21:45:22.368759   15036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 21:45:22.374700   15036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
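	The b5213941.0 link name follows OpenSSL's subject-hash convention: the hash computed two steps above plus a .0 suffix for the first certificate with that hash. Reproduced by hand:
	  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  b5213941
	  $ ls -l /etc/ssl/certs/b5213941.0
	  ... /etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem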
	I1226 21:45:22.382983   15036 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 21:45:22.386050   15036 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 21:45:22.386098   15036 kubeadm.go:404] StartCluster: {Name:addons-989445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-989445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:45:22.386215   15036 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1226 21:45:22.386255   15036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1226 21:45:22.422369   15036 cri.go:89] found id: ""
	I1226 21:45:22.422445   15036 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1226 21:45:22.431497   15036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1226 21:45:22.439691   15036 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1226 21:45:22.439745   15036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1226 21:45:22.446959   15036 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 21:45:22.447004   15036 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1226 21:45:22.531718   15036 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1226 21:45:22.598888   15036 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 21:45:31.568703   15036 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1226 21:45:31.568795   15036 kubeadm.go:322] [preflight] Running pre-flight checks
	I1226 21:45:31.568928   15036 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1226 21:45:31.569004   15036 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1226 21:45:31.569052   15036 kubeadm.go:322] OS: Linux
	I1226 21:45:31.569113   15036 kubeadm.go:322] CGROUPS_CPU: enabled
	I1226 21:45:31.569180   15036 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1226 21:45:31.569246   15036 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1226 21:45:31.569311   15036 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1226 21:45:31.569378   15036 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1226 21:45:31.569445   15036 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1226 21:45:31.569507   15036 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1226 21:45:31.569574   15036 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1226 21:45:31.569638   15036 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1226 21:45:31.569735   15036 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1226 21:45:31.569861   15036 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1226 21:45:31.569988   15036 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1226 21:45:31.570074   15036 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 21:45:31.571525   15036 out.go:204]   - Generating certificates and keys ...
	I1226 21:45:31.571619   15036 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1226 21:45:31.571673   15036 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1226 21:45:31.571727   15036 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1226 21:45:31.571779   15036 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1226 21:45:31.571829   15036 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1226 21:45:31.571889   15036 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1226 21:45:31.571932   15036 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1226 21:45:31.572063   15036 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-989445 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1226 21:45:31.572139   15036 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1226 21:45:31.572310   15036 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-989445 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1226 21:45:31.572420   15036 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1226 21:45:31.572563   15036 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1226 21:45:31.572628   15036 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1226 21:45:31.572696   15036 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 21:45:31.572758   15036 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 21:45:31.572822   15036 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 21:45:31.572933   15036 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 21:45:31.573028   15036 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 21:45:31.573156   15036 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 21:45:31.573270   15036 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 21:45:31.574761   15036 out.go:204]   - Booting up control plane ...
	I1226 21:45:31.574879   15036 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 21:45:31.574967   15036 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 21:45:31.575020   15036 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 21:45:31.575160   15036 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 21:45:31.575299   15036 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 21:45:31.575337   15036 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1226 21:45:31.575541   15036 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1226 21:45:31.575642   15036 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001972 seconds
	I1226 21:45:31.575760   15036 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1226 21:45:31.575929   15036 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1226 21:45:31.576021   15036 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1226 21:45:31.576267   15036 kubeadm.go:322] [mark-control-plane] Marking the node addons-989445 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1226 21:45:31.576349   15036 kubeadm.go:322] [bootstrap-token] Using token: 0ca7h6.6pjzedh7q6pfdxua
	I1226 21:45:31.577603   15036 out.go:204]   - Configuring RBAC rules ...
	I1226 21:45:31.577739   15036 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1226 21:45:31.577860   15036 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1226 21:45:31.578048   15036 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1226 21:45:31.578156   15036 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1226 21:45:31.578248   15036 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1226 21:45:31.578325   15036 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1226 21:45:31.578435   15036 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1226 21:45:31.578474   15036 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1226 21:45:31.578521   15036 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1226 21:45:31.578527   15036 kubeadm.go:322] 
	I1226 21:45:31.578593   15036 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1226 21:45:31.578602   15036 kubeadm.go:322] 
	I1226 21:45:31.578688   15036 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1226 21:45:31.578697   15036 kubeadm.go:322] 
	I1226 21:45:31.578728   15036 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1226 21:45:31.578801   15036 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1226 21:45:31.578866   15036 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1226 21:45:31.578876   15036 kubeadm.go:322] 
	I1226 21:45:31.578942   15036 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1226 21:45:31.578954   15036 kubeadm.go:322] 
	I1226 21:45:31.579010   15036 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1226 21:45:31.579021   15036 kubeadm.go:322] 
	I1226 21:45:31.579083   15036 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1226 21:45:31.579182   15036 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1226 21:45:31.579268   15036 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1226 21:45:31.579279   15036 kubeadm.go:322] 
	I1226 21:45:31.579378   15036 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1226 21:45:31.579477   15036 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1226 21:45:31.579490   15036 kubeadm.go:322] 
	I1226 21:45:31.579587   15036 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0ca7h6.6pjzedh7q6pfdxua \
	I1226 21:45:31.579718   15036 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cbd3139c85275a56e0c84c386786206b386d7a2d9a6f7a7acac9428358424ddc \
	I1226 21:45:31.579753   15036 kubeadm.go:322] 	--control-plane 
	I1226 21:45:31.579764   15036 kubeadm.go:322] 
	I1226 21:45:31.579859   15036 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1226 21:45:31.579870   15036 kubeadm.go:322] 
	I1226 21:45:31.579962   15036 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0ca7h6.6pjzedh7q6pfdxua \
	I1226 21:45:31.580102   15036 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cbd3139c85275a56e0c84c386786206b386d7a2d9a6f7a7acac9428358424ddc 
	I1226 21:45:31.580117   15036 cni.go:84] Creating CNI manager for ""
	I1226 21:45:31.580134   15036 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:45:31.581833   15036 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1226 21:45:31.583307   15036 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 21:45:31.590324   15036 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1226 21:45:31.590343   15036 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 21:45:31.678218   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1226 21:45:32.399287   15036 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1226 21:45:32.399364   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:32.399364   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b minikube.k8s.io/name=addons-989445 minikube.k8s.io/updated_at=2023_12_26T21_45_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:32.406641   15036 ops.go:34] apiserver oom_adj: -16
	I1226 21:45:32.499998   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:33.000973   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:33.500498   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:34.000283   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:34.500305   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:35.000099   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:35.500429   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:36.000115   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:36.500487   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:37.000593   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:37.500593   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:38.000072   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:38.500295   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:39.000727   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:39.500580   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:40.000779   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:40.500694   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:41.000051   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:41.500834   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:42.000832   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:42.500076   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:43.000810   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:43.500930   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:44.000784   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:44.500818   15036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:45:44.569011   15036 kubeadm.go:1088] duration metric: took 12.169715625s to wait for elevateKubeSystemPrivileges.
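	The repeated "kubectl get sa default" calls above are a poll: the default service account is created asynchronously by the controller manager, so minikube retries roughly every half second until it appears. The equivalent shell idiom, as a sketch:
	  $ until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done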
	I1226 21:45:44.569046   15036 kubeadm.go:406] StartCluster complete in 22.182951186s
	I1226 21:45:44.569071   15036 settings.go:142] acquiring lock: {Name:mk12d34f71cd28d3e5987ed147ca378c18cddf69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:44.569192   15036 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 21:45:44.569548   15036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/kubeconfig: {Name:mkba7ef3601947363f4aefe62b6956e6c044a4a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:44.569750   15036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1226 21:45:44.569820   15036 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
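	The toEnable map above mirrors what the addons subcommand reports per profile; the enabled set could be read back with the same binary used elsewhere in this report:
	  $ out/minikube-linux-amd64 -p addons-989445 addons list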
	I1226 21:45:44.569938   15036 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-989445"
	I1226 21:45:44.569995   15036 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-989445"
	I1226 21:45:44.570019   15036 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-989445"
	I1226 21:45:44.570027   15036 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-989445"
	I1226 21:45:44.569946   15036 addons.go:69] Setting yakd=true in profile "addons-989445"
	I1226 21:45:44.569996   15036 addons.go:69] Setting helm-tiller=true in profile "addons-989445"
	I1226 21:45:44.570098   15036 host.go:66] Checking if "addons-989445" exists ...
	I1226 21:45:44.570125   15036 addons.go:237] Setting addon helm-tiller=true in "addons-989445"
	I1226 21:45:44.569958   15036 addons.go:69] Setting gcp-auth=true in profile "addons-989445"
	I1226 21:45:44.570099   15036 addons.go:237] Setting addon yakd=true in "addons-989445"
	I1226 21:45:44.570179   15036 host.go:66] Checking if "addons-989445" exists ...
	I1226 21:45:44.570216   15036 host.go:66] Checking if "addons-989445" exists ...
	I1226 21:45:44.569965   15036 addons.go:69] Setting metrics-server=true in profile "addons-989445"
	I1226 21:45:44.570260   15036 addons.go:237] Setting addon metrics-server=true in "addons-989445"
	I1226 21:45:44.570296   15036 host.go:66] Checking if "addons-989445" exists ...
	I1226 21:45:44.570405   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:44.570576   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:44.570607   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:44.570642   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:44.570748   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:44.569967   15036 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-989445"
	I1226 21:45:44.570898   15036 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-989445"
	I1226 21:45:44.570955   15036 host.go:66] Checking if "addons-989445" exists ...
	I1226 21:45:44.569973   15036 addons.go:69] Setting ingress-dns=true in profile "addons-989445"
	I1226 21:45:44.571173   15036 addons.go:237] Setting addon ingress-dns=true in "addons-989445"
	I1226 21:45:44.571249   15036 host.go:66] Checking if "addons-989445" exists ...
	I1226 21:45:44.571419   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:44.571659   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:44.569975   15036 addons.go:69] Setting registry=true in profile "addons-989445"
	I1226 21:45:44.573241   15036 addons.go:237] Setting addon registry=true in "addons-989445"
	I1226 21:45:44.573309   15036 host.go:66] Checking if "addons-989445" exists ...
	I1226 21:45:44.573847   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:44.569983   15036 addons.go:69] Setting storage-provisioner=true in profile "addons-989445"
	I1226 21:45:44.577523   15036 addons.go:237] Setting addon storage-provisioner=true in "addons-989445"
	I1226 21:45:44.577619   15036 host.go:66] Checking if "addons-989445" exists ...
	I1226 21:45:44.578437   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:44.569981   15036 addons.go:69] Setting inspektor-gadget=true in profile "addons-989445"
	I1226 21:45:44.583645   15036 addons.go:237] Setting addon inspektor-gadget=true in "addons-989445"
	I1226 21:45:44.584167   15036 host.go:66] Checking if "addons-989445" exists ...
	I1226 21:45:44.569973   15036 addons.go:69] Setting ingress=true in profile "addons-989445"
	I1226 21:45:44.584831   15036 addons.go:237] Setting addon ingress=true in "addons-989445"
	I1226 21:45:44.584898   15036 host.go:66] Checking if "addons-989445" exists ...
	I1226 21:45:44.585371   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:44.569980   15036 config.go:182] Loaded profile config "addons-989445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 21:45:44.569997   15036 addons.go:69] Setting volumesnapshots=true in profile "addons-989445"
	I1226 21:45:44.586561   15036 addons.go:237] Setting addon volumesnapshots=true in "addons-989445"
	I1226 21:45:44.586690   15036 host.go:66] Checking if "addons-989445" exists ...
	I1226 21:45:44.586591   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:44.570174   15036 mustload.go:65] Loading cluster: addons-989445
	I1226 21:45:44.587304   15036 config.go:182] Loaded profile config "addons-989445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 21:45:44.587524   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:44.587742   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
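Each "Setting addon ...=true" goroutine above gates on the node container being up, which is why the log repeats "docker container inspect addons-989445 --format={{.State.Status}}". A minimal Go sketch of that probe, assuming only that docker is on PATH (the helper name is illustrative, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out exactly as the cli_runner lines above do and
// returns the container's .State.Status ("running", "exited", ...).
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("addons-989445")
	if err != nil {
		panic(err)
	}
	fmt.Println(state) // "running" while the profile is up
}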
	I1226 21:45:44.634307   15036 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1226 21:45:44.636096   15036 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1226 21:45:44.636142   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1226 21:45:44.637892   15036 out.go:177]   - Using image docker.io/registry:2.8.3
	I1226 21:45:44.636407   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:44.634895   15036 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-989445"
	I1226 21:45:44.569947   15036 addons.go:69] Setting default-storageclass=true in profile "addons-989445"
	I1226 21:45:44.569956   15036 addons.go:69] Setting cloud-spanner=true in profile "addons-989445"
	I1226 21:45:44.642896   15036 host.go:66] Checking if "addons-989445" exists ...
	I1226 21:45:44.642924   15036 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-989445"
	I1226 21:45:44.642995   15036 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1226 21:45:44.647673   15036 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1226 21:45:44.647803   15036 addons.go:237] Setting addon cloud-spanner=true in "addons-989445"
	I1226 21:45:44.648966   15036 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1226 21:45:44.649049   15036 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1226 21:45:44.649830   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:44.649843   15036 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1226 21:45:44.650124   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:44.651849   15036 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 21:45:44.651951   15036 host.go:66] Checking if "addons-989445" exists ...
	I1226 21:45:44.653790   15036 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I1226 21:45:44.655778   15036 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1226 21:45:44.655886   15036 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1226 21:45:44.656392   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:44.656872   15036 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I1226 21:45:44.657797   15036 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1226 21:45:44.657881   15036 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1226 21:45:44.657923   15036 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1226 21:45:44.658012   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1226 21:45:44.658081   15036 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 21:45:44.659006   15036 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1226 21:45:44.659019   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1226 21:45:44.659021   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1226 21:45:44.659031   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:44.659073   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:44.659076   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:44.659212   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1226 21:45:44.659258   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:44.658094   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1226 21:45:44.660920   15036 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1226 21:45:44.660935   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1226 21:45:44.660942   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:44.662111   15036 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1226 21:45:44.658084   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1226 21:45:44.660783   15036 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1226 21:45:44.660978   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:44.663471   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:44.664864   15036 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1226 21:45:44.663765   15036 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1226 21:45:44.665998   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1226 21:45:44.666047   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:44.667588   15036 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1226 21:45:44.668870   15036 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1226 21:45:44.670578   15036 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1226 21:45:44.671860   15036 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1226 21:45:44.672948   15036 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1226 21:45:44.672967   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1226 21:45:44.673028   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:44.674043   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:44.682560   15036 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I1226 21:45:44.683896   15036 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1226 21:45:44.685553   15036 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1226 21:45:44.687258   15036 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1226 21:45:44.687287   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1226 21:45:44.687395   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:44.700034   15036 addons.go:237] Setting addon default-storageclass=true in "addons-989445"
	I1226 21:45:44.700079   15036 host.go:66] Checking if "addons-989445" exists ...
	I1226 21:45:44.700572   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:44.712777   15036 host.go:66] Checking if "addons-989445" exists ...
	I1226 21:45:44.715805   15036 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1226 21:45:44.717064   15036 out.go:177]   - Using image docker.io/busybox:stable
	I1226 21:45:44.718538   15036 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1226 21:45:44.718560   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1226 21:45:44.718618   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:44.721454   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:44.732858   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:44.738602   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:44.740569   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:44.747457   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:44.748347   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:44.752162   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:44.757021   15036 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I1226 21:45:44.755468   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:44.762290   15036 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I1226 21:45:44.762305   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1226 21:45:44.762321   15036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1226 21:45:44.762363   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:44.764785   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:44.765110   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:44.769988   15036 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1226 21:45:44.770015   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1226 21:45:44.770083   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:44.781100   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:44.803729   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:44.810694   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
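The "new ssh client" lines dial 127.0.0.1:32772, the host port that the preceding docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' calls resolved for the node's sshd, and the "scp memory --> <path> (<n> bytes)" lines stream manifests held in memory to the node over that connection. A sketch of both steps using golang.org/x/crypto/ssh; piping into sudo tee stands in for minikube's actual transfer code, and the key path is shortened from the sshutil lines above:

package main

import (
	"bytes"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path abbreviated from the sshutil SSHKeyPath above.
	key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/addons-989445/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32772", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test rig only
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// "scp memory": the manifest never touches the local disk.
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
	sess.Stdin = bytes.NewReader(manifest)
	if err := sess.Run("sudo tee /etc/kubernetes/addons/demo.yaml > /dev/null"); err != nil {
		log.Fatal(err)
	}
}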
	I1226 21:45:45.056807   15036 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1226 21:45:45.056836   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1226 21:45:45.061627   15036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 21:45:45.062066   15036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1226 21:45:45.079566   15036 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-989445" context rescaled to 1 replicas
	I1226 21:45:45.079699   15036 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 21:45:45.081506   15036 out.go:177] * Verifying Kubernetes components...
	I1226 21:45:45.083048   15036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
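systemctl is-active --quiet prints nothing by design: the kubelet's state comes back solely through the exit status (0 when the unit is active), which is why the matching Completed line at 21:45:50.462135 carries no stdout and the check reduces cleanly to pass/fail.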
	I1226 21:45:45.164901   15036 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1226 21:45:45.164974   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1226 21:45:45.257266   15036 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1226 21:45:45.257383   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1226 21:45:45.263000   15036 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1226 21:45:45.263085   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1226 21:45:45.274043   15036 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I1226 21:45:45.274077   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1226 21:45:45.275495   15036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1226 21:45:45.360340   15036 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1226 21:45:45.360369   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1226 21:45:45.372397   15036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1226 21:45:45.457440   15036 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1226 21:45:45.457477   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1226 21:45:45.472240   15036 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I1226 21:45:45.472359   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1226 21:45:45.473453   15036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1226 21:45:45.475226   15036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1226 21:45:45.556522   15036 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1226 21:45:45.556672   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1226 21:45:45.563579   15036 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1226 21:45:45.563649   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1226 21:45:45.569930   15036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1226 21:45:45.581284   15036 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1226 21:45:45.581318   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1226 21:45:45.662463   15036 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1226 21:45:45.662490   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1226 21:45:45.664321   15036 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1226 21:45:45.664343   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1226 21:45:45.665598   15036 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1226 21:45:45.665617   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1226 21:45:45.770598   15036 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1226 21:45:45.770625   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1226 21:45:45.862624   15036 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1226 21:45:45.862709   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1226 21:45:45.970951   15036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1226 21:45:45.974034   15036 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1226 21:45:45.974136   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1226 21:45:46.056219   15036 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1226 21:45:46.056246   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1226 21:45:46.056391   15036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1226 21:45:46.173820   15036 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1226 21:45:46.173866   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1226 21:45:46.265398   15036 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1226 21:45:46.265425   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1226 21:45:46.367739   15036 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1226 21:45:46.367849   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1226 21:45:46.466064   15036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1226 21:45:46.468299   15036 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1226 21:45:46.468353   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1226 21:45:46.569678   15036 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1226 21:45:46.569776   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1226 21:45:46.573909   15036 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1226 21:45:46.574015   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1226 21:45:46.876656   15036 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1226 21:45:46.876771   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1226 21:45:46.964103   15036 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1226 21:45:46.964136   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1226 21:45:47.063928   15036 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1226 21:45:47.063981   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1226 21:45:47.275479   15036 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I1226 21:45:47.275528   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1226 21:45:47.278482   15036 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1226 21:45:47.278505   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1226 21:45:47.362096   15036 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.599737383s)
	I1226 21:45:47.362139   15036 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
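The 2.59s command that just completed rewrites the coredns ConfigMap in place: one sed expression inserts a hosts stanza ahead of the "forward . /etc/resolv.conf" plugin, the other inserts a log directive ahead of "errors". Reconstructed from those two expressions, the patched Corefile gains:

    log
    errors
    ...
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf

With fallthrough, queries for anything other than host.minikube.internal continue to the remaining plugins, so the injected record does not shadow cluster or upstream DNS.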
	I1226 21:45:47.374361   15036 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1226 21:45:47.374392   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1226 21:45:47.573091   15036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1226 21:45:47.671552   15036 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1226 21:45:47.671706   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1226 21:45:47.674575   15036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1226 21:45:47.767283   15036 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1226 21:45:47.767390   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1226 21:45:47.964721   15036 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1226 21:45:47.964827   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1226 21:45:48.358053   15036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1226 21:45:48.665578   15036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1226 21:45:50.462000   15036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.400332181s)
	I1226 21:45:50.462085   15036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.399998562s)
	I1226 21:45:50.462135   15036 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.379064513s)
	I1226 21:45:50.463132   15036 node_ready.go:35] waiting up to 6m0s for node "addons-989445" to be "Ready" ...
	I1226 21:45:50.463354   15036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.187826982s)
	I1226 21:45:50.463449   15036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.090997885s)
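node_ready.go will now poll the node object for up to 6m0s; each "Ready":"False" line further down is one iteration of that poll. The condition being read is the node's Ready status condition, roughly as follows (types from k8s.io/api; the helper and the example values are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeReady mirrors the node_ready.go checks: a node counts as "Ready"
// only when its Ready condition reports True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	n := corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
		{Type: corev1.NodeReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println(nodeReady(&n)) // false, matching the "Ready":"False" lines below
}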
	I1226 21:45:51.563269   15036 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1226 21:45:51.563472   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:51.591556   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:51.986918   15036 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1226 21:45:52.080177   15036 addons.go:237] Setting addon gcp-auth=true in "addons-989445"
	I1226 21:45:52.080249   15036 host.go:66] Checking if "addons-989445" exists ...
	I1226 21:45:52.080777   15036 cli_runner.go:164] Run: docker container inspect addons-989445 --format={{.State.Status}}
	I1226 21:45:52.102477   15036 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1226 21:45:52.102534   15036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-989445
	I1226 21:45:52.119048   15036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/addons-989445/id_rsa Username:docker}
	I1226 21:45:52.375155   15036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.901656607s)
	I1226 21:45:52.375193   15036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.899882231s)
	I1226 21:45:52.375224   15036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.805256667s)
	I1226 21:45:52.375347   15036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.404276117s)
	I1226 21:45:52.375192   15036 addons.go:473] Verifying addon ingress=true in "addons-989445"
	I1226 21:45:52.376926   15036 out.go:177] * Verifying ingress addon...
	I1226 21:45:52.375437   15036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.319024737s)
	I1226 21:45:52.375471   15036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.909379637s)
	I1226 21:45:52.375516   15036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.80238932s)
	I1226 21:45:52.375610   15036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.700908727s)
	I1226 21:45:52.375690   15036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.017541557s)
	I1226 21:45:52.378234   15036 addons.go:473] Verifying addon metrics-server=true in "addons-989445"
	I1226 21:45:52.378258   15036 addons.go:473] Verifying addon registry=true in "addons-989445"
	I1226 21:45:52.379745   15036 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-989445 service yakd-dashboard -n yakd-dashboard
	
	
	W1226 21:45:52.378273   15036 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1226 21:45:52.379116   15036 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1226 21:45:52.381118   15036 out.go:177] * Verifying registry addon...
	I1226 21:45:52.381135   15036 retry.go:31] will retry after 207.380422ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
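The failure above is a CRD-establishment race, not a broken manifest: the same kubectl apply batch creates the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass that depends on them, and the apiserver has not finished establishing the new kinds when the custom resource is mapped, hence "no matches for kind". The scheduled 207ms retry re-runs the batch with --force at 21:45:52.589989 below and succeeds once the CRDs are registered. The retry shape, sketched in Go (command, file, and backoff values are illustrative, not retry.go's):

package main

import (
	"log"
	"os/exec"
	"time"
)

// applyWithRetry re-runs kubectl apply until CRD-backed kinds resolve,
// mirroring the retry.go behaviour logged above.
func applyWithRetry(files []string, attempts int) error {
	args := []string{"apply", "--force"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", args...).Run(); err == nil {
			return nil
		}
		// "ensure CRDs are installed first" clears once the apiserver has
		// established the CRDs from the first pass; back off and go again.
		time.Sleep(time.Duration(i+1) * 200 * time.Millisecond)
	}
	return err
}

func main() {
	if err := applyWithRetry([]string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}, 5); err != nil {
		log.Fatal(err)
	}
}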
	W1226 21:45:52.382856   15036 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1226 21:45:52.383408   15036 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1226 21:45:52.387611   15036 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1226 21:45:52.387634   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:45:52.387853   15036 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1226 21:45:52.387872   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
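Each kapi.go:75/86/96 triple above is one polling loop per addon: list the pods matching a label selector, report how many were found, then re-check until every one is running. A minimal client-go version of the same loop (function name, interval, and timeout are illustrative, not minikube's values):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods lists pods matching selector in ns and polls until all of
// them report phase Running, or the deadline passes.
func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		if len(pods.Items) > 0 && running == len(pods.Items) {
			return nil
		}
		fmt.Printf("waiting for pod %q: %d/%d running\n", selector, running, len(pods.Items))
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %q", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPods(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
}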
	I1226 21:45:52.467481   15036 node_ready.go:58] node "addons-989445" has status "Ready":"False"
	I1226 21:45:52.589989   15036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1226 21:45:52.957824   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:45:52.960100   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:45:53.096087   15036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.430391983s)
	I1226 21:45:53.096137   15036 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-989445"
	I1226 21:45:53.098083   15036 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1226 21:45:53.099447   15036 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1226 21:45:53.099377   15036 out.go:177] * Verifying csi-hostpath-driver addon...
	I1226 21:45:53.102020   15036 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1226 21:45:53.103769   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1226 21:45:53.104796   15036 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1226 21:45:53.108978   15036 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1226 21:45:53.108995   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:45:53.174396   15036 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1226 21:45:53.174433   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1226 21:45:53.193480   15036 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1226 21:45:53.193501   15036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1226 21:45:53.213463   15036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1226 21:45:53.386091   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:45:53.387881   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:45:53.659571   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:45:53.960533   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:45:53.960852   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:45:54.160388   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:45:54.176657   15036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.586612424s)
	I1226 21:45:54.462957   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:45:54.469338   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:45:54.477296   15036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.263778654s)
	I1226 21:45:54.478374   15036 addons.go:473] Verifying addon gcp-auth=true in "addons-989445"
	I1226 21:45:54.480021   15036 out.go:177] * Verifying gcp-auth addon...
	I1226 21:45:54.480604   15036 node_ready.go:58] node "addons-989445" has status "Ready":"False"
	I1226 21:45:54.483965   15036 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1226 21:45:54.560421   15036 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1226 21:45:54.560459   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:45:54.661521   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:45:54.886828   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:45:54.888498   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:45:54.989062   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:45:55.111571   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:45:55.386154   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:45:55.458207   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:45:55.558843   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:45:55.659629   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:45:55.886777   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:45:55.957053   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:45:55.988404   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:45:56.158743   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:45:56.385528   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:45:56.386771   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:45:56.488138   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:45:56.609748   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:45:56.886322   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:45:56.887848   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:45:56.967419   15036 node_ready.go:58] node "addons-989445" has status "Ready":"False"
	I1226 21:45:56.987508   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:45:57.109972   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:45:57.385520   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:45:57.387472   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:45:57.487087   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:45:57.610427   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:45:57.885422   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:45:57.888259   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:45:57.989277   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:45:58.109049   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:45:58.385633   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:45:58.387326   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:45:58.489674   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:45:58.613039   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:45:58.885626   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:45:58.887614   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:45:58.987843   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:45:59.109416   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:45:59.385274   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:45:59.387170   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:45:59.466883   15036 node_ready.go:58] node "addons-989445" has status "Ready":"False"
	I1226 21:45:59.488238   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:45:59.608325   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:45:59.885936   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:45:59.888589   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:45:59.988009   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:00.109732   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:00.385708   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:00.387971   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:00.489356   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:00.608611   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:00.885443   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:00.887745   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:00.988201   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:01.108891   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:01.386596   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:01.387599   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:01.487229   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:01.610959   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:01.886684   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:01.888773   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:01.966469   15036 node_ready.go:58] node "addons-989445" has status "Ready":"False"
	I1226 21:46:01.987198   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:02.110015   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:02.385711   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:02.387284   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:02.487812   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:02.610924   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:02.886427   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:02.888363   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:02.988444   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:03.109914   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:03.385737   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:03.387431   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:03.488164   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:03.608534   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:03.885965   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:03.887951   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:03.967819   15036 node_ready.go:58] node "addons-989445" has status "Ready":"False"
	I1226 21:46:03.987733   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:04.110346   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:04.385728   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:04.387196   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:04.487872   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:04.609089   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:04.885755   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:04.888535   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:04.987740   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:05.109140   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:05.385429   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:05.387674   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:05.487396   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:05.609770   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:05.886280   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:05.888077   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:05.988535   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:06.109852   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:06.385834   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:06.386962   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:06.467444   15036 node_ready.go:58] node "addons-989445" has status "Ready":"False"
	I1226 21:46:06.488394   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:06.608624   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:06.885787   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:06.887494   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:06.987859   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:07.109025   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:07.385479   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:07.387538   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:07.487551   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:07.609777   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:07.886205   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:07.887370   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:07.988189   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:08.110087   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:08.385820   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:08.386880   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:08.488199   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:08.609238   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:08.884846   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:08.887368   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:08.966902   15036 node_ready.go:58] node "addons-989445" has status "Ready":"False"
	I1226 21:46:08.988944   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:09.109749   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:09.385300   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:09.387148   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:09.488074   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:09.608564   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:09.885286   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:09.886954   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:09.988519   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:10.108808   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:10.385356   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:10.388043   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:10.487655   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:10.609435   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:10.884826   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:10.886563   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:10.989284   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:11.108771   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:11.385314   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:11.387463   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:11.467217   15036 node_ready.go:58] node "addons-989445" has status "Ready":"False"
	I1226 21:46:11.488195   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:11.609190   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:11.886077   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:11.887220   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:11.987424   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:12.109125   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:12.387040   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:12.388191   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:12.488131   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:12.609758   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:12.885604   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:12.887919   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:12.987625   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:13.110113   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:13.385777   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:13.387555   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:13.488320   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:13.608772   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:13.885636   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:13.887510   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:13.966464   15036 node_ready.go:58] node "addons-989445" has status "Ready":"False"
	I1226 21:46:13.987928   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:14.109549   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:14.386064   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:14.388223   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:14.487987   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:14.609677   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:14.884928   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:14.887459   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:14.987220   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:15.108854   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:15.386477   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:15.388160   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:15.487638   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:15.610568   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:15.885528   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:15.888103   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:15.967519   15036 node_ready.go:58] node "addons-989445" has status "Ready":"False"
	I1226 21:46:15.989162   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:16.109721   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:16.385897   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:16.387085   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:16.487611   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:16.610513   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:16.884879   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:16.886402   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:16.991629   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:17.109075   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:17.385434   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:17.387227   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:17.487525   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:17.609433   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:17.884742   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:17.887111   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:17.966772   15036 node_ready.go:49] node "addons-989445" has status "Ready":"True"
	I1226 21:46:17.966866   15036 node_ready.go:38] duration metric: took 27.503701763s waiting for node "addons-989445" to be "Ready" ...
	I1226 21:46:17.966890   15036 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 21:46:17.988548   15036 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tcxqb" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:18.063412   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:18.110564   15036 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1226 21:46:18.110603   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:18.385511   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:18.389529   15036 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1226 21:46:18.389562   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:18.488726   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:18.661332   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:18.887074   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:18.892668   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:18.988490   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:19.111341   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:19.387720   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:19.389371   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:19.488588   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:19.658113   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:19.886478   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:19.888355   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:19.987962   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:19.998049   15036 pod_ready.go:102] pod "coredns-5dd5756b68-tcxqb" in "kube-system" namespace has status "Ready":"False"
	I1226 21:46:20.111116   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:20.385355   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:20.388515   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:20.488241   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:20.494608   15036 pod_ready.go:92] pod "coredns-5dd5756b68-tcxqb" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:20.494640   15036 pod_ready.go:81] duration metric: took 2.506061906s waiting for pod "coredns-5dd5756b68-tcxqb" in "kube-system" namespace to be "Ready" ...
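The pod_ready.go waits above follow a simple poll loop: fetch the pod, check its Ready condition, sleep, retry until a timeout. Below is a minimal client-go sketch of that pattern, not minikube's actual code; the kubeconfig path is a placeholder, and the pod name, namespace, and 6m0s timeout are taken from the log.

// Poll a pod until its Ready condition is True, as the pod_ready.go lines above record.
// Hypothetical kubeconfig path; pod name and namespace come from the log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // the log waits "up to 6m0s"
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").
			Get(context.TODO(), "coredns-5dd5756b68-tcxqb", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Printf("pod %q is Ready\n", pod.Name)
					return
				}
			}
			fmt.Printf("pod %q has status Ready:False\n", pod.Name)
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}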
	I1226 21:46:20.494705   15036 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-989445" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:20.500702   15036 pod_ready.go:92] pod "etcd-addons-989445" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:20.500735   15036 pod_ready.go:81] duration metric: took 6.019502ms waiting for pod "etcd-addons-989445" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:20.500754   15036 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-989445" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:20.507042   15036 pod_ready.go:92] pod "kube-apiserver-addons-989445" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:20.507062   15036 pod_ready.go:81] duration metric: took 6.300736ms waiting for pod "kube-apiserver-addons-989445" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:20.507072   15036 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-989445" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:20.511360   15036 pod_ready.go:92] pod "kube-controller-manager-addons-989445" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:20.511381   15036 pod_ready.go:81] duration metric: took 4.301943ms waiting for pod "kube-controller-manager-addons-989445" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:20.511396   15036 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5dxzj" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:20.515696   15036 pod_ready.go:92] pod "kube-proxy-5dxzj" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:20.515726   15036 pod_ready.go:81] duration metric: took 4.315654ms waiting for pod "kube-proxy-5dxzj" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:20.515735   15036 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-989445" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:20.611616   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:20.885733   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:20.887757   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:20.892287   15036 pod_ready.go:92] pod "kube-scheduler-addons-989445" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:20.892310   15036 pod_ready.go:81] duration metric: took 376.568193ms waiting for pod "kube-scheduler-addons-989445" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:20.892319   15036 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-vrsz7" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:20.987556   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:21.111273   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:21.386908   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:21.389308   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:21.487554   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:21.611691   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:21.886042   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:21.887584   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:21.987448   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:22.109703   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:22.384864   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:22.387774   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:22.488424   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:22.612259   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:22.886725   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:22.889548   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:22.964047   15036 pod_ready.go:102] pod "metrics-server-7c66d45ddc-vrsz7" in "kube-system" namespace has status "Ready":"False"
	I1226 21:46:22.987527   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:23.162163   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:23.387805   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:23.390123   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:23.488827   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:23.611910   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:23.885809   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:23.887941   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:23.987965   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:24.112224   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:24.386971   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:24.388272   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:24.487830   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:24.609363   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:24.885630   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:24.887456   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:24.989218   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:25.111040   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:25.385049   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:25.387209   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:25.399558   15036 pod_ready.go:102] pod "metrics-server-7c66d45ddc-vrsz7" in "kube-system" namespace has status "Ready":"False"
	I1226 21:46:25.488241   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:25.610473   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:25.887098   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:25.889480   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:25.989671   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:26.161301   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:26.387171   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:26.390649   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:26.488216   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:26.611274   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:26.885159   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:26.887771   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:26.988071   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:27.110495   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:27.386052   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:27.388302   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:27.488019   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:27.612547   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:27.886391   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:27.889482   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:27.898402   15036 pod_ready.go:102] pod "metrics-server-7c66d45ddc-vrsz7" in "kube-system" namespace has status "Ready":"False"
	I1226 21:46:27.987537   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:28.112524   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:28.386486   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:28.388797   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:28.488046   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:28.611673   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:28.885508   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:28.888921   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:28.988010   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:29.110632   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:29.385865   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:29.387993   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:29.488269   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:29.610908   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:29.886890   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:29.889580   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:29.898527   15036 pod_ready.go:102] pod "metrics-server-7c66d45ddc-vrsz7" in "kube-system" namespace has status "Ready":"False"
	I1226 21:46:29.988021   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:30.161061   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:30.476695   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:30.478257   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:30.564319   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:30.662949   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:30.887648   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:30.889044   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:31.056983   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:31.159519   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:31.460632   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:31.463748   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:31.558032   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:31.660235   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:31.885970   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:31.888383   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:31.899147   15036 pod_ready.go:102] pod "metrics-server-7c66d45ddc-vrsz7" in "kube-system" namespace has status "Ready":"False"
	I1226 21:46:31.987996   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:32.112520   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:32.384812   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:32.387477   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:32.489060   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:32.611208   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:32.886791   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:32.888782   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:32.987903   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:33.111933   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:33.385875   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:33.387597   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:33.487294   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:33.610378   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:33.885246   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:33.888471   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:33.900345   15036 pod_ready.go:102] pod "metrics-server-7c66d45ddc-vrsz7" in "kube-system" namespace has status "Ready":"False"
	I1226 21:46:33.987456   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:34.111072   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:34.387073   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:34.390108   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:34.489140   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:34.610643   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:34.886166   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:34.890861   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:34.987868   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:35.110480   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:35.385905   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:35.387617   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:35.488904   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:35.659366   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:35.886450   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:35.888919   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:35.960452   15036 pod_ready.go:102] pod "metrics-server-7c66d45ddc-vrsz7" in "kube-system" namespace has status "Ready":"False"
	I1226 21:46:35.990031   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:36.159868   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:36.385804   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:36.387758   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:36.487869   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:36.611887   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:36.887267   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:36.889953   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:36.989546   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:37.110235   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:37.388231   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:37.388363   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:37.488608   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:37.611439   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:37.885281   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:37.888034   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:37.989166   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:38.110854   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:38.387576   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:38.390325   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:38.399195   15036 pod_ready.go:92] pod "metrics-server-7c66d45ddc-vrsz7" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:38.399225   15036 pod_ready.go:81] duration metric: took 17.506900087s waiting for pod "metrics-server-7c66d45ddc-vrsz7" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:38.399249   15036 pod_ready.go:38] duration metric: took 20.432297416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 21:46:38.399269   15036 api_server.go:52] waiting for apiserver process to appear ...
	I1226 21:46:38.399330   15036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 21:46:38.469373   15036 api_server.go:72] duration metric: took 53.389616496s to wait for apiserver process to appear ...
	I1226 21:46:38.469398   15036 api_server.go:88] waiting for apiserver healthz status ...
	I1226 21:46:38.469418   15036 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1226 21:46:38.473960   15036 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1226 21:46:38.475260   15036 api_server.go:141] control plane version: v1.28.4
	I1226 21:46:38.475282   15036 api_server.go:131] duration metric: took 5.878455ms to wait for apiserver health ...
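The healthz probe logged just above is a plain HTTPS GET against the apiserver that expects a 200 response with body "ok". A minimal sketch, using the endpoint from the log; InsecureSkipVerify is a sketch-only shortcut, since minikube itself authenticates with client certificates.

// Probe the apiserver /healthz endpoint; a healthy control plane answers 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch-only: skip certificate verification for brevity.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect "200: ok"
}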
	I1226 21:46:38.475290   15036 system_pods.go:43] waiting for kube-system pods to appear ...
	I1226 21:46:38.486758   15036 system_pods.go:59] 19 kube-system pods found
	I1226 21:46:38.486794   15036 system_pods.go:61] "coredns-5dd5756b68-tcxqb" [5bd6f178-97cc-4ffb-a120-0e73b1ccf88b] Running
	I1226 21:46:38.486802   15036 system_pods.go:61] "csi-hostpath-attacher-0" [40062881-9162-44c1-bc79-3a8b23f88924] Running
	I1226 21:46:38.486814   15036 system_pods.go:61] "csi-hostpath-resizer-0" [fa52935b-4148-4a76-86a8-489c7302cea5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1226 21:46:38.486825   15036 system_pods.go:61] "csi-hostpathplugin-cg4mt" [7dc8ba32-3e21-4830-b49e-d9b53cb79205] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1226 21:46:38.486837   15036 system_pods.go:61] "etcd-addons-989445" [8cacfba1-6ad8-4641-9b7b-7217e2c13e55] Running
	I1226 21:46:38.486844   15036 system_pods.go:61] "kindnet-qfxn8" [53c7094d-dd50-4864-bd0e-4ed5dbd86a1a] Running
	I1226 21:46:38.486853   15036 system_pods.go:61] "kube-apiserver-addons-989445" [2f07c02c-84f9-4d69-94c1-4dab34d706fd] Running
	I1226 21:46:38.486861   15036 system_pods.go:61] "kube-controller-manager-addons-989445" [c5a840ea-d7df-45b8-8d32-286a20120abe] Running
	I1226 21:46:38.486872   15036 system_pods.go:61] "kube-ingress-dns-minikube" [6144f448-c65a-40bf-a8c8-1a77b903831e] Running
	I1226 21:46:38.486878   15036 system_pods.go:61] "kube-proxy-5dxzj" [ed24aee0-3925-4e3c-8209-f06fd09f37b0] Running
	I1226 21:46:38.486885   15036 system_pods.go:61] "kube-scheduler-addons-989445" [24acc1e0-60f0-407c-aa35-8d234e52e3bd] Running
	I1226 21:46:38.486891   15036 system_pods.go:61] "metrics-server-7c66d45ddc-vrsz7" [738a78bc-e7c3-4b71-b308-ca3539f9358f] Running
	I1226 21:46:38.486914   15036 system_pods.go:61] "nvidia-device-plugin-daemonset-zx8pr" [51833bfc-697b-48b5-bb36-fd13682ccab0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1226 21:46:38.486925   15036 system_pods.go:61] "registry-5qthm" [25ca6ee7-e882-457e-890f-a491cac8bc8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1226 21:46:38.486939   15036 system_pods.go:61] "registry-proxy-kwnsf" [19b23904-1fe8-4f7f-baac-ae30b2300171] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1226 21:46:38.486978   15036 system_pods.go:61] "snapshot-controller-58dbcc7b99-9n4th" [15194399-ebf0-4428-baac-9f944964cc78] Running
	I1226 21:46:38.486990   15036 system_pods.go:61] "snapshot-controller-58dbcc7b99-lmg6h" [06df55df-d3cc-425b-bfc4-89085139a9c8] Running
	I1226 21:46:38.486996   15036 system_pods.go:61] "storage-provisioner" [a11b21ea-a3f1-43db-ade8-89ae79109c14] Running
	I1226 21:46:38.487003   15036 system_pods.go:61] "tiller-deploy-7b677967b9-mtt89" [a1183792-1eb5-4966-bdcc-c273012f4542] Running
	I1226 21:46:38.487011   15036 system_pods.go:74] duration metric: took 11.714257ms to wait for pod list to return data ...
	I1226 21:46:38.487025   15036 default_sa.go:34] waiting for default service account to be created ...
	I1226 21:46:38.489642   15036 default_sa.go:45] found service account: "default"
	I1226 21:46:38.489674   15036 default_sa.go:55] duration metric: took 2.642517ms for default service account to be created ...
	I1226 21:46:38.489685   15036 system_pods.go:116] waiting for k8s-apps to be running ...
	I1226 21:46:38.491529   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:38.505778   15036 system_pods.go:86] 19 kube-system pods found
	I1226 21:46:38.505838   15036 system_pods.go:89] "coredns-5dd5756b68-tcxqb" [5bd6f178-97cc-4ffb-a120-0e73b1ccf88b] Running
	I1226 21:46:38.505851   15036 system_pods.go:89] "csi-hostpath-attacher-0" [40062881-9162-44c1-bc79-3a8b23f88924] Running
	I1226 21:46:38.505869   15036 system_pods.go:89] "csi-hostpath-resizer-0" [fa52935b-4148-4a76-86a8-489c7302cea5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1226 21:46:38.505885   15036 system_pods.go:89] "csi-hostpathplugin-cg4mt" [7dc8ba32-3e21-4830-b49e-d9b53cb79205] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1226 21:46:38.505899   15036 system_pods.go:89] "etcd-addons-989445" [8cacfba1-6ad8-4641-9b7b-7217e2c13e55] Running
	I1226 21:46:38.505909   15036 system_pods.go:89] "kindnet-qfxn8" [53c7094d-dd50-4864-bd0e-4ed5dbd86a1a] Running
	I1226 21:46:38.505918   15036 system_pods.go:89] "kube-apiserver-addons-989445" [2f07c02c-84f9-4d69-94c1-4dab34d706fd] Running
	I1226 21:46:38.505926   15036 system_pods.go:89] "kube-controller-manager-addons-989445" [c5a840ea-d7df-45b8-8d32-286a20120abe] Running
	I1226 21:46:38.505943   15036 system_pods.go:89] "kube-ingress-dns-minikube" [6144f448-c65a-40bf-a8c8-1a77b903831e] Running
	I1226 21:46:38.505950   15036 system_pods.go:89] "kube-proxy-5dxzj" [ed24aee0-3925-4e3c-8209-f06fd09f37b0] Running
	I1226 21:46:38.505963   15036 system_pods.go:89] "kube-scheduler-addons-989445" [24acc1e0-60f0-407c-aa35-8d234e52e3bd] Running
	I1226 21:46:38.505971   15036 system_pods.go:89] "metrics-server-7c66d45ddc-vrsz7" [738a78bc-e7c3-4b71-b308-ca3539f9358f] Running
	I1226 21:46:38.505988   15036 system_pods.go:89] "nvidia-device-plugin-daemonset-zx8pr" [51833bfc-697b-48b5-bb36-fd13682ccab0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1226 21:46:38.505999   15036 system_pods.go:89] "registry-5qthm" [25ca6ee7-e882-457e-890f-a491cac8bc8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1226 21:46:38.506069   15036 system_pods.go:89] "registry-proxy-kwnsf" [19b23904-1fe8-4f7f-baac-ae30b2300171] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1226 21:46:38.506089   15036 system_pods.go:89] "snapshot-controller-58dbcc7b99-9n4th" [15194399-ebf0-4428-baac-9f944964cc78] Running
	I1226 21:46:38.506107   15036 system_pods.go:89] "snapshot-controller-58dbcc7b99-lmg6h" [06df55df-d3cc-425b-bfc4-89085139a9c8] Running
	I1226 21:46:38.506124   15036 system_pods.go:89] "storage-provisioner" [a11b21ea-a3f1-43db-ade8-89ae79109c14] Running
	I1226 21:46:38.506162   15036 system_pods.go:89] "tiller-deploy-7b677967b9-mtt89" [a1183792-1eb5-4966-bdcc-c273012f4542] Running
	I1226 21:46:38.506181   15036 system_pods.go:126] duration metric: took 16.48884ms to wait for k8s-apps to be running ...
	I1226 21:46:38.506199   15036 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 21:46:38.506265   15036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 21:46:38.569666   15036 system_svc.go:56] duration metric: took 63.453278ms WaitForService to wait for kubelet.
	I1226 21:46:38.569704   15036 kubeadm.go:581] duration metric: took 53.489951548s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 21:46:38.569742   15036 node_conditions.go:102] verifying NodePressure condition ...
	I1226 21:46:38.573412   15036 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1226 21:46:38.573455   15036 node_conditions.go:123] node cpu capacity is 8
	I1226 21:46:38.573473   15036 node_conditions.go:105] duration metric: took 3.724242ms to run NodePressure ...
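The node_conditions.go output above comes from reading the node object's reported capacity and conditions. A minimal sketch under the same assumptions as before (placeholder kubeconfig path, node name from the log); the pressure-condition loop is illustrative, since the log only shows the capacity values.

// Read the node's capacity and pressure conditions, mirroring the
// node_conditions.go lines above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-989445", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
	for _, c := range node.Status.Conditions {
		if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure ||
			c.Type == corev1.NodePIDPressure) && c.Status != corev1.ConditionFalse {
			fmt.Printf("node reports pressure: %s=%s\n", c.Type, c.Status)
		}
	}
}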
	I1226 21:46:38.573489   15036 start.go:228] waiting for startup goroutines ...
	I1226 21:46:38.611494   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:38.885332   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:38.888533   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:38.987785   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:39.111835   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:39.386936   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:39.389531   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:39.489017   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:39.612098   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:39.885881   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:39.888771   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:39.989223   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:40.112395   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:40.386274   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:40.388746   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:40.487613   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:40.611101   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:40.885247   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:40.889088   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:40.987919   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:41.111149   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:41.386364   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:41.388091   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:41.489175   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:41.610009   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:41.885858   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:41.888368   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:41.988367   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:42.114405   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:42.386336   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:42.388088   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:42.487355   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:42.609555   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:42.887012   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:42.888600   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:42.987901   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:43.112760   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:43.386480   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:43.389389   15036 kapi.go:107] duration metric: took 51.005976931s to wait for kubernetes.io/minikube-addons=registry ...
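The kapi.go waits that dominate this log poll a label selector until every matching pod is up; "current state: Pending: [<nil>]" means the selector's pods are not all there yet, and the "Found N Pods for label selector" lines mark when matches first appear. A minimal sketch of that loop follows, with the registry selector from the log; the kube-system namespace and kubeconfig path are assumptions.

// Wait on a label selector: list matching pods, succeed once all are Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	selector := "kubernetes.io/minikube-addons=registry"
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kube-system").
			List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			if running == len(pods.Items) {
				fmt.Printf("all %d pods for %q are Running\n", running, selector)
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting on selector", selector)
}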
	I1226 21:46:43.488847   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:43.610944   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:43.886643   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:43.987748   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:44.110598   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:44.385651   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:44.488798   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:44.610321   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:44.885756   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:44.988224   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:45.109927   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:45.385279   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:45.487771   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:45.611754   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:45.886687   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:45.989497   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:46.111054   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:46.385878   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:46.489004   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:46.610552   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:46.885361   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:46.988174   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:47.109994   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:47.386235   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:47.487538   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:47.656535   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:47.956744   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:48.060438   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:48.168250   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:48.470520   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:48.558736   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:48.663115   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:48.886570   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:48.987648   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:49.110875   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:49.386182   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:49.488423   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:49.659855   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:49.886189   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:49.988466   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:50.159633   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:50.386975   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:50.487714   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:50.661432   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:50.885054   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:50.988589   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:51.114058   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:51.385548   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:51.488615   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:51.610807   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:51.886591   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:51.988493   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:52.112649   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:52.385697   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:52.489168   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:52.610726   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:52.886202   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:52.988621   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:53.110281   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:53.385159   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:53.488095   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:53.611989   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:53.885736   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:53.986945   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:54.111819   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:54.384750   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:54.487517   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:54.661247   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:54.885631   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:54.989836   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:55.161062   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:55.386645   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:55.487621   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:55.661161   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:55.886429   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:55.987845   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:56.112707   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:56.386864   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:56.488731   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:56.611361   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:56.885614   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:56.988674   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:57.111355   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:57.384981   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:57.487589   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:57.610133   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:57.886187   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:57.988805   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:58.111367   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:58.386367   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:58.488213   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:58.662092   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:58.886251   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:58.987953   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:59.111610   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:59.667715   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:59.668988   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:59.669470   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:59.959527   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:00.059399   15036 kapi.go:107] duration metric: took 1m5.575422163s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1226 21:47:00.061714   15036 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-989445 cluster.
	I1226 21:47:00.063802   15036 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1226 21:47:00.065768   15036 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1226 21:47:00.162700   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:00.478143   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:00.660773   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:00.886583   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:01.161081   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:01.387292   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:01.611190   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:01.887222   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:02.110859   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:02.385804   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:02.611054   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:02.887337   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:03.112157   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:03.386394   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:03.610838   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:03.885971   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:04.110785   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:04.386360   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:04.610261   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:04.888023   15036 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:05.111762   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:05.386881   15036 kapi.go:107] duration metric: took 1m13.00775896s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1226 21:47:05.609560   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:06.110792   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:06.663079   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:07.111067   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:07.613105   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:08.110100   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:08.611779   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:09.109651   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:09.611387   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:10.111227   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:10.610810   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:11.110859   15036 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:11.610531   15036 kapi.go:107] duration metric: took 1m18.50573829s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1226 21:47:11.612758   15036 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, helm-tiller, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1226 21:47:11.614242   15036 addons.go:508] enable addons completed in 1m27.044427455s: enabled=[storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner helm-tiller inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1226 21:47:11.614276   15036 start.go:233] waiting for cluster config update ...
	I1226 21:47:11.614294   15036 start.go:242] writing updated cluster config ...
	I1226 21:47:11.615513   15036 ssh_runner.go:195] Run: rm -f paused
	I1226 21:47:11.664906   15036 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1226 21:47:11.667054   15036 out.go:177] * Done! kubectl is now configured to use "addons-989445" cluster and "default" namespace by default
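	
	The gcp-auth messages above point at the `gcp-auth-skip-secret` pod label. Since the mutating webhook only evaluates pods at admission time, the label has to be present when the pod is created; labeling an already-running pod changes nothing until it is recreated, which is what the log's --refresh advice is about. A minimal sketch (the pod name and image are illustrative, not from this run):
	
	  kubectl --context addons-989445 run skip-demo --image=nginx \
	    --labels=gcp-auth-skip-secret=true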
	
	
	==> CRI-O <==
	Dec 26 21:49:54 addons-989445 crio[953]: time="2023-12-26 21:49:54.404424813Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7" id=69987fd8-a513-45fd-a5e5-fe3ae67053bc name=/runtime.v1.ImageService/PullImage
	Dec 26 21:49:54 addons-989445 crio[953]: time="2023-12-26 21:49:54.405226413Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=3b3ad46a-e04b-4eef-a7e0-44b4d6cecf42 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:49:54 addons-989445 crio[953]: time="2023-12-26 21:49:54.406679726Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=3b3ad46a-e04b-4eef-a7e0-44b4d6cecf42 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:49:54 addons-989445 crio[953]: time="2023-12-26 21:49:54.408058596Z" level=info msg="Creating container: default/hello-world-app-5d77478584-v6dmg/hello-world-app" id=af3f655d-a851-4509-bc62-22badaa75185 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 26 21:49:54 addons-989445 crio[953]: time="2023-12-26 21:49:54.408188529Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 26 21:49:54 addons-989445 crio[953]: time="2023-12-26 21:49:54.481547991Z" level=info msg="Created container 0eb6178cf9dc501252d4a8d1b02e7182d1482b64db95edf43a7a37b699b1ede4: default/hello-world-app-5d77478584-v6dmg/hello-world-app" id=af3f655d-a851-4509-bc62-22badaa75185 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 26 21:49:54 addons-989445 crio[953]: time="2023-12-26 21:49:54.482277907Z" level=info msg="Starting container: 0eb6178cf9dc501252d4a8d1b02e7182d1482b64db95edf43a7a37b699b1ede4" id=b9fc19d2-cac9-41be-92fc-79bfaf8cf585 name=/runtime.v1.RuntimeService/StartContainer
	Dec 26 21:49:54 addons-989445 crio[953]: time="2023-12-26 21:49:54.490009927Z" level=info msg="Started container" PID=9853 containerID=0eb6178cf9dc501252d4a8d1b02e7182d1482b64db95edf43a7a37b699b1ede4 description=default/hello-world-app-5d77478584-v6dmg/hello-world-app id=b9fc19d2-cac9-41be-92fc-79bfaf8cf585 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a39e67a81660e3c2a0f1074168905feb437d0a3b293698e731115ce5919fb948
	Dec 26 21:49:54 addons-989445 crio[953]: time="2023-12-26 21:49:54.941660945Z" level=info msg="Removing container: dffe9457a7ba1b8d013e1eb45683d55b63b7a56a5c50b043890750c41c515af0" id=f90e6c00-241a-4663-a5ef-315c9c581d2a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 26 21:49:54 addons-989445 crio[953]: time="2023-12-26 21:49:54.958019806Z" level=info msg="Removed container dffe9457a7ba1b8d013e1eb45683d55b63b7a56a5c50b043890750c41c515af0: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=f90e6c00-241a-4663-a5ef-315c9c581d2a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 26 21:49:56 addons-989445 crio[953]: time="2023-12-26 21:49:56.506322601Z" level=info msg="Stopping container: 325aa56d07c2dd80911bba5e6a75f43f67edebd5419fbebce6b8222359dd8e79 (timeout: 2s)" id=0103c232-23a2-4284-8f8f-7e9774cb6901 name=/runtime.v1.RuntimeService/StopContainer
	Dec 26 21:49:58 addons-989445 crio[953]: time="2023-12-26 21:49:58.513347647Z" level=warning msg="Stopping container 325aa56d07c2dd80911bba5e6a75f43f67edebd5419fbebce6b8222359dd8e79 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=0103c232-23a2-4284-8f8f-7e9774cb6901 name=/runtime.v1.RuntimeService/StopContainer
	Dec 26 21:49:58 addons-989445 conmon[5902]: conmon 325aa56d07c2dd80911b <ninfo>: container 5914 exited with status 137
	Dec 26 21:49:58 addons-989445 crio[953]: time="2023-12-26 21:49:58.646334474Z" level=info msg="Stopped container 325aa56d07c2dd80911bba5e6a75f43f67edebd5419fbebce6b8222359dd8e79: ingress-nginx/ingress-nginx-controller-69cff4fd79-hm88b/controller" id=0103c232-23a2-4284-8f8f-7e9774cb6901 name=/runtime.v1.RuntimeService/StopContainer
	Dec 26 21:49:58 addons-989445 crio[953]: time="2023-12-26 21:49:58.646861367Z" level=info msg="Stopping pod sandbox: 06aee47778ffa711eb3fd57fc734ce95ec697787e9e757b431d6ea934f00d922" id=481780a3-a404-407d-ba79-d78ccc3dfe9f name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 26 21:49:58 addons-989445 crio[953]: time="2023-12-26 21:49:58.649728515Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-G2FZSGJVRUKF4OPR - [0:0]\n:KUBE-HP-7C7ZNP25XL72KJN5 - [0:0]\n-X KUBE-HP-G2FZSGJVRUKF4OPR\n-X KUBE-HP-7C7ZNP25XL72KJN5\nCOMMIT\n"
	Dec 26 21:49:58 addons-989445 crio[953]: time="2023-12-26 21:49:58.651152062Z" level=info msg="Closing host port tcp:80"
	Dec 26 21:49:58 addons-989445 crio[953]: time="2023-12-26 21:49:58.651216510Z" level=info msg="Closing host port tcp:443"
	Dec 26 21:49:58 addons-989445 crio[953]: time="2023-12-26 21:49:58.652768120Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 26 21:49:58 addons-989445 crio[953]: time="2023-12-26 21:49:58.652799685Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 26 21:49:58 addons-989445 crio[953]: time="2023-12-26 21:49:58.652977888Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-hm88b Namespace:ingress-nginx ID:06aee47778ffa711eb3fd57fc734ce95ec697787e9e757b431d6ea934f00d922 UID:ed4bf112-548e-4555-8bda-0a99f0ef89ec NetNS:/var/run/netns/580112b7-c880-4711-841b-e1a960373c98 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 26 21:49:58 addons-989445 crio[953]: time="2023-12-26 21:49:58.653166083Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-hm88b from CNI network \"kindnet\" (type=ptp)"
	Dec 26 21:49:58 addons-989445 crio[953]: time="2023-12-26 21:49:58.692150439Z" level=info msg="Stopped pod sandbox: 06aee47778ffa711eb3fd57fc734ce95ec697787e9e757b431d6ea934f00d922" id=481780a3-a404-407d-ba79-d78ccc3dfe9f name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 26 21:49:58 addons-989445 crio[953]: time="2023-12-26 21:49:58.953916410Z" level=info msg="Removing container: 325aa56d07c2dd80911bba5e6a75f43f67edebd5419fbebce6b8222359dd8e79" id=5560f7d9-b58e-4044-b670-397dadc1364d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 26 21:49:58 addons-989445 crio[953]: time="2023-12-26 21:49:58.968259451Z" level=info msg="Removed container 325aa56d07c2dd80911bba5e6a75f43f67edebd5419fbebce6b8222359dd8e79: ingress-nginx/ingress-nginx-controller-69cff4fd79-hm88b/controller" id=5560f7d9-b58e-4044-b670-397dadc1364d name=/runtime.v1.RuntimeService/RemoveContainer
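	
	In the CRI-O excerpt above, the ingress-nginx controller does not exit within its 2s stop timeout, so the runtime escalates and conmon reports exit status 137, i.e. 128 + 9 (SIGKILL). While a container still exists it can be inspected from inside the node with crictl; a sketch (the short ID comes from this log and only works before the RemoveContainer call seen at 21:49:58):
	
	  out/minikube-linux-amd64 -p addons-989445 ssh "sudo crictl ps -a --name controller"
	  out/minikube-linux-amd64 -p addons-989445 ssh "sudo crictl inspect 325aa56d07c2d"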
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0eb6178cf9dc5       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      9 seconds ago       Running             hello-world-app           0                   a39e67a81660e       hello-world-app-5d77478584-v6dmg
	e58c9903995ad       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                              2 minutes ago       Running             nginx                     0                   240c42a7c5fd1       nginx
	c43a9d9317b56       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   a1dfa75faea75       headlamp-7ddfbb94ff-f54vg
	36f3bba26af80       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   d651f50d0fe57       gcp-auth-d4c87556c-72sgh
	06ddb68d29129       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   b222c1befc110       local-path-provisioner-78b46b4d5c-rwrfk
	8aff16a694b83       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago       Exited              patch                     1                   f49469769d6ce       ingress-nginx-admission-patch-gcvld
	81fe23899d2b7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   2fd659854cee6       ingress-nginx-admission-create-nkk9f
	9c9f6e4c6eb5c       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   a2b6aae077f84       yakd-dashboard-9947fc6bf-wn7nr
	68d2c747b70af       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   6529df4b38caa       coredns-5dd5756b68-tcxqb
	ef87731137515       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   ec913cc44a2a2       storage-provisioner
	f6e2deea3b7ee       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   e44f3a81bf117       kube-proxy-5dxzj
	c240f9c195e75       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   b50b39c47424e       kindnet-qfxn8
	d5afcafd55d2c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   f5e82af22cc27       etcd-addons-989445
	e017559d74d7e       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   0b9a4b72af058       kube-apiserver-addons-989445
	06de6e98a4c0c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   ec7a3a7e7cdf5       kube-scheduler-addons-989445
	aff48b560e5ff       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   d785aae188c9d       kube-controller-manager-addons-989445
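	
	The table above is the post-mortem snapshot of CRI-O's view of the node. Roughly the same listing can be pulled from a live cluster, e.g. to map the bare image IDs in the IMAGE column back to repo tags:
	
	  out/minikube-linux-amd64 -p addons-989445 ssh "sudo crictl ps -a"
	  out/minikube-linux-amd64 -p addons-989445 ssh "sudo crictl images --digests"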
	
	
	==> coredns [68d2c747b70af91cc1c60481ba3bfd5ec7d1db245b2b4805ea44f83d0dfd4abf] <==
	[INFO] 10.244.0.8:50896 - 30548 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063044s
	[INFO] 10.244.0.8:55190 - 27344 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.003929533s
	[INFO] 10.244.0.8:55190 - 7381 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004235283s
	[INFO] 10.244.0.8:60568 - 20551 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004525715s
	[INFO] 10.244.0.8:60568 - 56901 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004978759s
	[INFO] 10.244.0.8:44172 - 18660 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004606919s
	[INFO] 10.244.0.8:44172 - 3298 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004698013s
	[INFO] 10.244.0.8:59500 - 8635 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000054428s
	[INFO] 10.244.0.8:59500 - 1980 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000117257s
	[INFO] 10.244.0.20:38389 - 24449 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000349615s
	[INFO] 10.244.0.20:36680 - 34226 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000445702s
	[INFO] 10.244.0.20:48030 - 49942 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000154002s
	[INFO] 10.244.0.20:38977 - 11721 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00020196s
	[INFO] 10.244.0.20:52467 - 21537 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00011182s
	[INFO] 10.244.0.20:59600 - 12585 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000201878s
	[INFO] 10.244.0.20:41422 - 63690 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.006952506s
	[INFO] 10.244.0.20:35150 - 20592 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007105396s
	[INFO] 10.244.0.20:52356 - 39569 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004094069s
	[INFO] 10.244.0.20:37893 - 20794 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004930295s
	[INFO] 10.244.0.20:41601 - 14040 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004728177s
	[INFO] 10.244.0.20:46961 - 43582 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004927564s
	[INFO] 10.244.0.20:49269 - 42961 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000710471s
	[INFO] 10.244.0.20:37234 - 28951 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000835911s
	[INFO] 10.244.0.25:49617 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000390462s
	[INFO] 10.244.0.25:44278 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00020209s
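	
	The NXDOMAIN bursts above are expected behavior, not failures: with the default ndots:5 resolv.conf, each lookup is first tried against every search-domain expansion (cluster.local plus the GCE us-central1-a.c.k8s-minikube.internal and google.internal suffixes) before the fully qualified name answers NOERROR. The usual in-cluster spot check, sketched with an illustrative pod name (busybox:1.28 is the customary image here because later tags ship a broken nslookup):
	
	  kubectl --context addons-989445 run dns-test -it --rm --restart=Never \
	    --image=busybox:1.28 -- nslookup registry.kube-system.svc.cluster.local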
	
	
	==> describe nodes <==
	Name:               addons-989445
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-989445
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=addons-989445
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_26T21_45_32_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-989445
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 21:45:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-989445
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 21:49:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Dec 2023 21:48:34 +0000   Tue, 26 Dec 2023 21:45:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Dec 2023 21:48:34 +0000   Tue, 26 Dec 2023 21:45:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Dec 2023 21:48:34 +0000   Tue, 26 Dec 2023 21:45:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Dec 2023 21:48:34 +0000   Tue, 26 Dec 2023 21:46:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-989445
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 d5bed9dcec744bb9b3b9b90b60cd29d6
	  System UUID:                cfec3e45-8a79-4208-beab-4897d59ed0fb
	  Boot ID:                    86db03b9-ef11-43ea-be40-040b33a40e54
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-v6dmg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-d4c87556c-72sgh                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  headlamp                    headlamp-7ddfbb94ff-f54vg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 coredns-5dd5756b68-tcxqb                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m19s
	  kube-system                 etcd-addons-989445                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m32s
	  kube-system                 kindnet-qfxn8                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m20s
	  kube-system                 kube-apiserver-addons-989445               250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-controller-manager-addons-989445      200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-proxy-5dxzj                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-scheduler-addons-989445               100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  local-path-storage          local-path-provisioner-78b46b4d5c-rwrfk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-wn7nr             0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             348Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m14s  kube-proxy       
	  Normal  Starting                 4m32s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m32s  kubelet          Node addons-989445 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m32s  kubelet          Node addons-989445 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m32s  kubelet          Node addons-989445 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m20s  node-controller  Node addons-989445 event: Registered Node addons-989445 in Controller
	  Normal  NodeReady                3m46s  kubelet          Node addons-989445 status is now: NodeReady
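	
	This node description can be regenerated on a live cluster with:
	
	  kubectl --context addons-989445 describe node addons-989445
	
	Note the Ready transition at 21:46:17 in the Conditions table, which matches the NodeReady event roughly 46 seconds after kubelet start.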
	
	
	==> dmesg <==
	[  +0.009010] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004783] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000842] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000887] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.001001] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000867] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000767] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000795] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 8
	[ +10.106543] kauditd_printk_skb: 36 callbacks suppressed
	[Dec26 21:47] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 9b 1d fe 84 3e 36 6a aa 14 12 d3 08 00
	[  +1.029564] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 9b 1d fe 84 3e 36 6a aa 14 12 d3 08 00
	[  +2.019820] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 9b 1d fe 84 3e 36 6a aa 14 12 d3 08 00
	[  +4.223510] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 7a 9b 1d fe 84 3e 36 6a aa 14 12 d3 08 00
	[  +8.187162] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 9b 1d fe 84 3e 36 6a aa 14 12 d3 08 00
	[Dec26 21:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 9b 1d fe 84 3e 36 6a aa 14 12 d3 08 00
	[ +33.788538] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 9b 1d fe 84 3e 36 6a aa 14 12 d3 08 00
	
	
	==> etcd [d5afcafd55d2c5b7178ecf77a6d3d5c8c07534d575815a8de0ab29b4f20dec14] <==
	{"level":"info","ts":"2023-12-26T21:45:49.156423Z","caller":"traceutil/trace.go:171","msg":"trace[796255564] range","detail":"{range_begin:/registry/roles/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:411; }","duration":"176.758588ms","start":"2023-12-26T21:45:48.979637Z","end":"2023-12-26T21:45:49.156396Z","steps":["trace[796255564] 'agreement among raft nodes before linearized reading'  (duration: 176.564764ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-26T21:45:49.156601Z","caller":"traceutil/trace.go:171","msg":"trace[873653432] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"176.515888ms","start":"2023-12-26T21:45:48.980029Z","end":"2023-12-26T21:45:49.156545Z","steps":["trace[873653432] 'process raft request'  (duration: 176.131844ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-26T21:45:49.157044Z","caller":"traceutil/trace.go:171","msg":"trace[1285975550] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"177.186608ms","start":"2023-12-26T21:45:48.97984Z","end":"2023-12-26T21:45:49.157026Z","steps":["trace[1285975550] 'process raft request'  (duration: 176.187142ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:45:50.961327Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.581509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/tiller-deploy\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-26T21:45:50.961828Z","caller":"traceutil/trace.go:171","msg":"trace[954499191] range","detail":"{range_begin:/registry/services/specs/kube-system/tiller-deploy; range_end:; response_count:0; response_revision:510; }","duration":"100.82481ms","start":"2023-12-26T21:45:50.860709Z","end":"2023-12-26T21:45:50.961533Z","steps":["trace[954499191] 'agreement among raft nodes before linearized reading'  (duration: 100.553188ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:59.664006Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.778861ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128026081064494984 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/secrets/kube-node-lease/gcp-auth\" mod_revision:0 > success:<request_put:<key:\"/registry/secrets/kube-node-lease/gcp-auth\" value_size:4390 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-12-26T21:46:59.664108Z","caller":"traceutil/trace.go:171","msg":"trace[1867206830] linearizableReadLoop","detail":"{readStateIndex:1179; appliedIndex:1178; }","duration":"297.181616ms","start":"2023-12-26T21:46:59.366908Z","end":"2023-12-26T21:46:59.66409Z","steps":["trace[1867206830] 'read index received'  (duration: 139.255682ms)","trace[1867206830] 'applied index is now lower than readState.Index'  (duration: 157.924849ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-26T21:46:59.664169Z","caller":"traceutil/trace.go:171","msg":"trace[219496069] transaction","detail":"{read_only:false; response_revision:1143; number_of_response:1; }","duration":"297.749188ms","start":"2023-12-26T21:46:59.366398Z","end":"2023-12-26T21:46:59.664148Z","steps":["trace[219496069] 'process raft request'  (duration: 139.765302ms)","trace[219496069] 'compare'  (duration: 157.588915ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-26T21:46:59.664284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.017912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-26T21:46:59.664359Z","caller":"traceutil/trace.go:171","msg":"trace[283724278] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1143; }","duration":"289.099172ms","start":"2023-12-26T21:46:59.375243Z","end":"2023-12-26T21:46:59.664342Z","steps":["trace[283724278] 'agreement among raft nodes before linearized reading'  (duration: 288.984752ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:59.66437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.890623ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-12-26T21:46:59.664423Z","caller":"traceutil/trace.go:171","msg":"trace[1836166268] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1143; }","duration":"249.938661ms","start":"2023-12-26T21:46:59.414475Z","end":"2023-12-26T21:46:59.664413Z","steps":["trace[1836166268] 'agreement among raft nodes before linearized reading'  (duration: 249.852392ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:59.664303Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.514594ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11683"}
	{"level":"info","ts":"2023-12-26T21:46:59.665637Z","caller":"traceutil/trace.go:171","msg":"trace[751400693] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1143; }","duration":"179.84457ms","start":"2023-12-26T21:46:59.485779Z","end":"2023-12-26T21:46:59.665624Z","steps":["trace[751400693] 'agreement among raft nodes before linearized reading'  (duration: 178.474399ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:59.66424Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"297.361223ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-d4c87556c-72sgh\" ","response":"range_response_count:1 size:4394"}
	{"level":"info","ts":"2023-12-26T21:46:59.665781Z","caller":"traceutil/trace.go:171","msg":"trace[1772117596] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-d4c87556c-72sgh; range_end:; response_count:1; response_revision:1143; }","duration":"298.906118ms","start":"2023-12-26T21:46:59.366864Z","end":"2023-12-26T21:46:59.66577Z","steps":["trace[1772117596] 'agreement among raft nodes before linearized reading'  (duration: 297.300426ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:59.664704Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"282.011441ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14599"}
	{"level":"info","ts":"2023-12-26T21:46:59.665859Z","caller":"traceutil/trace.go:171","msg":"trace[1520156350] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1143; }","duration":"283.166045ms","start":"2023-12-26T21:46:59.382682Z","end":"2023-12-26T21:46:59.665848Z","steps":["trace[1520156350] 'agreement among raft nodes before linearized reading'  (duration: 281.969381ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-26T21:46:59.794691Z","caller":"traceutil/trace.go:171","msg":"trace[84708320] transaction","detail":"{read_only:false; response_revision:1145; number_of_response:1; }","duration":"122.562427ms","start":"2023-12-26T21:46:59.672046Z","end":"2023-12-26T21:46:59.794609Z","steps":["trace[84708320] 'process raft request'  (duration: 119.406502ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-26T21:47:21.872505Z","caller":"traceutil/trace.go:171","msg":"trace[1619489101] linearizableReadLoop","detail":"{readStateIndex:1350; appliedIndex:1349; }","duration":"123.06266ms","start":"2023-12-26T21:47:21.749416Z","end":"2023-12-26T21:47:21.872478Z","steps":["trace[1619489101] 'read index received'  (duration: 60.448506ms)","trace[1619489101] 'applied index is now lower than readState.Index'  (duration: 62.613164ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-26T21:47:21.872571Z","caller":"traceutil/trace.go:171","msg":"trace[2115286996] transaction","detail":"{read_only:false; response_revision:1305; number_of_response:1; }","duration":"135.59565ms","start":"2023-12-26T21:47:21.736944Z","end":"2023-12-26T21:47:21.872539Z","steps":["trace[2115286996] 'process raft request'  (duration: 72.932828ms)","trace[2115286996] 'compare'  (duration: 62.467303ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-26T21:47:21.872659Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.224465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/pvc-protection-controller\" ","response":"range_response_count:1 size:228"}
	{"level":"info","ts":"2023-12-26T21:47:21.872718Z","caller":"traceutil/trace.go:171","msg":"trace[947109241] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/pvc-protection-controller; range_end:; response_count:1; response_revision:1305; }","duration":"123.31054ms","start":"2023-12-26T21:47:21.749392Z","end":"2023-12-26T21:47:21.872702Z","steps":["trace[947109241] 'agreement among raft nodes before linearized reading'  (duration: 123.115851ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:47:21.872666Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.264204ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" ","response":"range_response_count:1 size:1849"}
	{"level":"info","ts":"2023-12-26T21:47:21.872776Z","caller":"traceutil/trace.go:171","msg":"trace[2055152873] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1305; }","duration":"123.381198ms","start":"2023-12-26T21:47:21.749381Z","end":"2023-12-26T21:47:21.872762Z","steps":["trace[2055152873] 'agreement among raft nodes before linearized reading'  (duration: 123.213618ms)"],"step_count":1}
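	
	The repeated "apply request took too long" warnings above mark requests that exceeded etcd's 100ms expected-duration; on a shared CI host this usually points to disk fsync latency or CPU contention rather than a cluster fault. A sketch for spot-checking etcd from its own pod (the cert paths assume minikube's default /var/lib/minikube/certs layout):
	
	  kubectl --context addons-989445 -n kube-system exec etcd-addons-989445 -- \
	    sh -c 'ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint status -w table'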
	
	
	==> gcp-auth [36f3bba26af8095ca38685225349b209f2f356572e6a6c4de51d756930dda4df] <==
	2023/12/26 21:46:59 GCP Auth Webhook started!
	2023/12/26 21:47:11 Ready to marshal response ...
	2023/12/26 21:47:11 Ready to write response ...
	2023/12/26 21:47:12 Ready to marshal response ...
	2023/12/26 21:47:12 Ready to write response ...
	2023/12/26 21:47:22 Ready to marshal response ...
	2023/12/26 21:47:22 Ready to write response ...
	2023/12/26 21:47:22 Ready to marshal response ...
	2023/12/26 21:47:22 Ready to write response ...
	2023/12/26 21:47:23 Ready to marshal response ...
	2023/12/26 21:47:23 Ready to write response ...
	2023/12/26 21:47:23 Ready to marshal response ...
	2023/12/26 21:47:23 Ready to write response ...
	2023/12/26 21:47:23 Ready to marshal response ...
	2023/12/26 21:47:23 Ready to write response ...
	2023/12/26 21:47:33 Ready to marshal response ...
	2023/12/26 21:47:33 Ready to write response ...
	2023/12/26 21:47:39 Ready to marshal response ...
	2023/12/26 21:47:39 Ready to write response ...
	2023/12/26 21:48:18 Ready to marshal response ...
	2023/12/26 21:48:18 Ready to write response ...
	2023/12/26 21:48:33 Ready to marshal response ...
	2023/12/26 21:48:33 Ready to write response ...
	2023/12/26 21:49:53 Ready to marshal response ...
	2023/12/26 21:49:53 Ready to write response ...
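	
	Each "Ready to marshal/write response" pair above corresponds to one admission request handled by the webhook, so the bursts line up with pod creations in the parallel tests. To confirm whether the webhook is still registered (e.g. after disabling the addon), a generic check is:
	
	  kubectl --context addons-989445 get mutatingwebhookconfigurations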
	
	
	==> kernel <==
	 21:50:03 up 32 min,  0 users,  load average: 0.27, 0.65, 0.35
	Linux addons-989445 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [c240f9c195e75e9a4153a144971175c59f19b9ffd8e8adc6e33ad073dc6536d6] <==
	I1226 21:47:57.933976       1 main.go:227] handling current node
	I1226 21:48:07.946535       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:48:07.946567       1 main.go:227] handling current node
	I1226 21:48:17.950776       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:48:17.950800       1 main.go:227] handling current node
	I1226 21:48:27.960788       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:48:27.960813       1 main.go:227] handling current node
	I1226 21:48:37.964870       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:48:37.964896       1 main.go:227] handling current node
	I1226 21:48:47.977250       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:48:47.977272       1 main.go:227] handling current node
	I1226 21:48:57.983639       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:48:57.983677       1 main.go:227] handling current node
	I1226 21:49:07.987179       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:49:07.987205       1 main.go:227] handling current node
	I1226 21:49:17.991416       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:49:17.991439       1 main.go:227] handling current node
	I1226 21:49:28.002513       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:49:28.002538       1 main.go:227] handling current node
	I1226 21:49:38.013670       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:49:38.013691       1 main.go:227] handling current node
	I1226 21:49:48.026207       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:49:48.026234       1 main.go:227] handling current node
	I1226 21:49:58.030233       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:49:58.030256       1 main.go:227] handling current node
	
	
	==> kube-apiserver [e017559d74d7ed4eb5e23fd30b4859e0e2a92c0e95d3a877aa73f70176f1565b] <==
	I1226 21:47:33.693483       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.250.58"}
	I1226 21:47:39.330892       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1226 21:47:42.395393       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.28:49104: read: connection reset by peer
	I1226 21:48:30.113027       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1226 21:48:49.132620       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1226 21:48:49.132671       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1226 21:48:49.139839       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1226 21:48:49.139899       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1226 21:48:49.147427       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1226 21:48:49.147581       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1226 21:48:49.148306       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1226 21:48:49.148355       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1226 21:48:49.158597       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1226 21:48:49.158807       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1226 21:48:49.162900       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1226 21:48:49.162959       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1226 21:48:49.170160       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1226 21:48:49.170231       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1226 21:48:49.175394       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1226 21:48:49.175430       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1226 21:48:50.148931       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1226 21:48:50.176328       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1226 21:48:50.179142       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1226 21:49:53.280007       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.88.43"}
	E1226 21:49:55.661727       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [aff48b560e5ff6793c9e1d2a087b5bd6acd144f75148784f054cbbb792564c86] <==
	I1226 21:49:13.979873       1 shared_informer.go:318] Caches are synced for resource quota
	I1226 21:49:14.311524       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1226 21:49:14.311582       1 shared_informer.go:318] Caches are synced for garbage collector
	W1226 21:49:18.888857       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:49:18.888888       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:49:21.455943       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:49:21.455982       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:49:25.734286       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:49:25.734316       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:49:46.577625       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:49:46.577651       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:49:47.560016       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:49:47.560046       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1226 21:49:53.106719       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1226 21:49:53.115057       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-v6dmg"
	I1226 21:49:53.120536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="14.054308ms"
	I1226 21:49:53.133139       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.540234ms"
	I1226 21:49:53.133264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="67.617µs"
	I1226 21:49:54.962988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.781729ms"
	I1226 21:49:54.963173       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="52.565µs"
	I1226 21:49:55.495124       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1226 21:49:55.495741       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="4.994µs"
	I1226 21:49:55.499481       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W1226 21:49:57.620890       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:49:57.620923       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [f6e2deea3b7ee3a0936a62a4b25e8e50b43ee044fdec4bffd9c9dd4298e8aad2] <==
	I1226 21:45:48.080904       1 server_others.go:69] "Using iptables proxy"
	I1226 21:45:48.656215       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1226 21:45:49.360888       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1226 21:45:49.468458       1 server_others.go:152] "Using iptables Proxier"
	I1226 21:45:49.554849       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1226 21:45:49.554890       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1226 21:45:49.554987       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1226 21:45:49.555249       1 server.go:846] "Version info" version="v1.28.4"
	I1226 21:45:49.555262       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1226 21:45:49.575168       1 config.go:315] "Starting node config controller"
	I1226 21:45:49.575277       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1226 21:45:49.575371       1 config.go:97] "Starting endpoint slice config controller"
	I1226 21:45:49.575399       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1226 21:45:49.575435       1 config.go:188] "Starting service config controller"
	I1226 21:45:49.575459       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1226 21:45:49.676615       1 shared_informer.go:318] Caches are synced for service config
	I1226 21:45:49.676699       1 shared_informer.go:318] Caches are synced for node config
	I1226 21:45:49.676783       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [06de6e98a4c0c0cf7e49ef396177362ea1b62f5b2ef71e62b55d584d64cfb571] <==
	E1226 21:45:28.661487       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1226 21:45:28.661384       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1226 21:45:28.661432       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1226 21:45:28.661592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1226 21:45:28.661627       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1226 21:45:28.661647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1226 21:45:28.661661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1226 21:45:28.661742       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1226 21:45:28.661807       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1226 21:45:28.661831       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1226 21:45:28.662169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1226 21:45:28.662196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1226 21:45:28.662221       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1226 21:45:28.662258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1226 21:45:28.662313       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1226 21:45:28.662372       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1226 21:45:29.501284       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1226 21:45:29.501340       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1226 21:45:29.599341       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1226 21:45:29.599387       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1226 21:45:29.635163       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1226 21:45:29.635203       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1226 21:45:29.698165       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1226 21:45:29.698203       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1226 21:45:30.081793       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 26 21:49:53 addons-989445 kubelet[1556]: I1226 21:49:53.223695    1556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9c1b2c84-3067-4565-9eee-fb0d8bd0d10a-gcp-creds\") pod \"hello-world-app-5d77478584-v6dmg\" (UID: \"9c1b2c84-3067-4565-9eee-fb0d8bd0d10a\") " pod="default/hello-world-app-5d77478584-v6dmg"
	Dec 26 21:49:53 addons-989445 kubelet[1556]: I1226 21:49:53.223776    1556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs9gt\" (UniqueName: \"kubernetes.io/projected/9c1b2c84-3067-4565-9eee-fb0d8bd0d10a-kube-api-access-fs9gt\") pod \"hello-world-app-5d77478584-v6dmg\" (UID: \"9c1b2c84-3067-4565-9eee-fb0d8bd0d10a\") " pod="default/hello-world-app-5d77478584-v6dmg"
	Dec 26 21:49:53 addons-989445 kubelet[1556]: W1226 21:49:53.587940    1556 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5a299fb3436b49784279acf8660926c349906abab9a80a64bc80ce47a0e77806/crio-a39e67a81660e3c2a0f1074168905feb437d0a3b293698e731115ce5919fb948 WatchSource:0}: Error finding container a39e67a81660e3c2a0f1074168905feb437d0a3b293698e731115ce5919fb948: Status 404 returned error can't find the container with id a39e67a81660e3c2a0f1074168905feb437d0a3b293698e731115ce5919fb948
	Dec 26 21:49:54 addons-989445 kubelet[1556]: I1226 21:49:54.563745    1556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr6zb\" (UniqueName: \"kubernetes.io/projected/6144f448-c65a-40bf-a8c8-1a77b903831e-kube-api-access-mr6zb\") pod \"6144f448-c65a-40bf-a8c8-1a77b903831e\" (UID: \"6144f448-c65a-40bf-a8c8-1a77b903831e\") "
	Dec 26 21:49:54 addons-989445 kubelet[1556]: I1226 21:49:54.565586    1556 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6144f448-c65a-40bf-a8c8-1a77b903831e-kube-api-access-mr6zb" (OuterVolumeSpecName: "kube-api-access-mr6zb") pod "6144f448-c65a-40bf-a8c8-1a77b903831e" (UID: "6144f448-c65a-40bf-a8c8-1a77b903831e"). InnerVolumeSpecName "kube-api-access-mr6zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 26 21:49:54 addons-989445 kubelet[1556]: I1226 21:49:54.664635    1556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mr6zb\" (UniqueName: \"kubernetes.io/projected/6144f448-c65a-40bf-a8c8-1a77b903831e-kube-api-access-mr6zb\") on node \"addons-989445\" DevicePath \"\""
	Dec 26 21:49:54 addons-989445 kubelet[1556]: I1226 21:49:54.940460    1556 scope.go:117] "RemoveContainer" containerID="dffe9457a7ba1b8d013e1eb45683d55b63b7a56a5c50b043890750c41c515af0"
	Dec 26 21:49:54 addons-989445 kubelet[1556]: I1226 21:49:54.958372    1556 scope.go:117] "RemoveContainer" containerID="dffe9457a7ba1b8d013e1eb45683d55b63b7a56a5c50b043890750c41c515af0"
	Dec 26 21:49:54 addons-989445 kubelet[1556]: E1226 21:49:54.958873    1556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dffe9457a7ba1b8d013e1eb45683d55b63b7a56a5c50b043890750c41c515af0\": container with ID starting with dffe9457a7ba1b8d013e1eb45683d55b63b7a56a5c50b043890750c41c515af0 not found: ID does not exist" containerID="dffe9457a7ba1b8d013e1eb45683d55b63b7a56a5c50b043890750c41c515af0"
	Dec 26 21:49:54 addons-989445 kubelet[1556]: I1226 21:49:54.958941    1556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dffe9457a7ba1b8d013e1eb45683d55b63b7a56a5c50b043890750c41c515af0"} err="failed to get container status \"dffe9457a7ba1b8d013e1eb45683d55b63b7a56a5c50b043890750c41c515af0\": rpc error: code = NotFound desc = could not find container \"dffe9457a7ba1b8d013e1eb45683d55b63b7a56a5c50b043890750c41c515af0\": container with ID starting with dffe9457a7ba1b8d013e1eb45683d55b63b7a56a5c50b043890750c41c515af0 not found: ID does not exist"
	Dec 26 21:49:54 addons-989445 kubelet[1556]: I1226 21:49:54.966400    1556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-v6dmg" podStartSLOduration=1.152928487 podCreationTimestamp="2023-12-26 21:49:53 +0000 UTC" firstStartedPulling="2023-12-26 21:49:53.591307247 +0000 UTC m=+262.272486304" lastFinishedPulling="2023-12-26 21:49:54.404704206 +0000 UTC m=+263.085883249" observedRunningTime="2023-12-26 21:49:54.956527892 +0000 UTC m=+263.637706970" watchObservedRunningTime="2023-12-26 21:49:54.966325432 +0000 UTC m=+263.647504544"
	Dec 26 21:49:55 addons-989445 kubelet[1556]: I1226 21:49:55.467188    1556 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6144f448-c65a-40bf-a8c8-1a77b903831e" path="/var/lib/kubelet/pods/6144f448-c65a-40bf-a8c8-1a77b903831e/volumes"
	Dec 26 21:49:57 addons-989445 kubelet[1556]: I1226 21:49:57.466929    1556 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="93968be3-0a5b-4439-adaa-f4fadf26eeba" path="/var/lib/kubelet/pods/93968be3-0a5b-4439-adaa-f4fadf26eeba/volumes"
	Dec 26 21:49:57 addons-989445 kubelet[1556]: I1226 21:49:57.467252    1556 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b2d45f57-7ed5-4daf-bc0e-a1d41d3c3973" path="/var/lib/kubelet/pods/b2d45f57-7ed5-4daf-bc0e-a1d41d3c3973/volumes"
	Dec 26 21:49:58 addons-989445 kubelet[1556]: I1226 21:49:58.893723    1556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ed4bf112-548e-4555-8bda-0a99f0ef89ec-webhook-cert\") pod \"ed4bf112-548e-4555-8bda-0a99f0ef89ec\" (UID: \"ed4bf112-548e-4555-8bda-0a99f0ef89ec\") "
	Dec 26 21:49:58 addons-989445 kubelet[1556]: I1226 21:49:58.893797    1556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d25bw\" (UniqueName: \"kubernetes.io/projected/ed4bf112-548e-4555-8bda-0a99f0ef89ec-kube-api-access-d25bw\") pod \"ed4bf112-548e-4555-8bda-0a99f0ef89ec\" (UID: \"ed4bf112-548e-4555-8bda-0a99f0ef89ec\") "
	Dec 26 21:49:58 addons-989445 kubelet[1556]: I1226 21:49:58.895646    1556 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed4bf112-548e-4555-8bda-0a99f0ef89ec-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ed4bf112-548e-4555-8bda-0a99f0ef89ec" (UID: "ed4bf112-548e-4555-8bda-0a99f0ef89ec"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 26 21:49:58 addons-989445 kubelet[1556]: I1226 21:49:58.895797    1556 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed4bf112-548e-4555-8bda-0a99f0ef89ec-kube-api-access-d25bw" (OuterVolumeSpecName: "kube-api-access-d25bw") pod "ed4bf112-548e-4555-8bda-0a99f0ef89ec" (UID: "ed4bf112-548e-4555-8bda-0a99f0ef89ec"). InnerVolumeSpecName "kube-api-access-d25bw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 26 21:49:58 addons-989445 kubelet[1556]: I1226 21:49:58.952963    1556 scope.go:117] "RemoveContainer" containerID="325aa56d07c2dd80911bba5e6a75f43f67edebd5419fbebce6b8222359dd8e79"
	Dec 26 21:49:58 addons-989445 kubelet[1556]: I1226 21:49:58.968633    1556 scope.go:117] "RemoveContainer" containerID="325aa56d07c2dd80911bba5e6a75f43f67edebd5419fbebce6b8222359dd8e79"
	Dec 26 21:49:58 addons-989445 kubelet[1556]: E1226 21:49:58.969020    1556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"325aa56d07c2dd80911bba5e6a75f43f67edebd5419fbebce6b8222359dd8e79\": container with ID starting with 325aa56d07c2dd80911bba5e6a75f43f67edebd5419fbebce6b8222359dd8e79 not found: ID does not exist" containerID="325aa56d07c2dd80911bba5e6a75f43f67edebd5419fbebce6b8222359dd8e79"
	Dec 26 21:49:58 addons-989445 kubelet[1556]: I1226 21:49:58.969073    1556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325aa56d07c2dd80911bba5e6a75f43f67edebd5419fbebce6b8222359dd8e79"} err="failed to get container status \"325aa56d07c2dd80911bba5e6a75f43f67edebd5419fbebce6b8222359dd8e79\": rpc error: code = NotFound desc = could not find container \"325aa56d07c2dd80911bba5e6a75f43f67edebd5419fbebce6b8222359dd8e79\": container with ID starting with 325aa56d07c2dd80911bba5e6a75f43f67edebd5419fbebce6b8222359dd8e79 not found: ID does not exist"
	Dec 26 21:49:58 addons-989445 kubelet[1556]: I1226 21:49:58.994390    1556 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ed4bf112-548e-4555-8bda-0a99f0ef89ec-webhook-cert\") on node \"addons-989445\" DevicePath \"\""
	Dec 26 21:49:58 addons-989445 kubelet[1556]: I1226 21:49:58.994429    1556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-d25bw\" (UniqueName: \"kubernetes.io/projected/ed4bf112-548e-4555-8bda-0a99f0ef89ec-kube-api-access-d25bw\") on node \"addons-989445\" DevicePath \"\""
	Dec 26 21:49:59 addons-989445 kubelet[1556]: I1226 21:49:59.467261    1556 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ed4bf112-548e-4555-8bda-0a99f0ef89ec" path="/var/lib/kubelet/pods/ed4bf112-548e-4555-8bda-0a99f0ef89ec/volumes"
	
	
	==> storage-provisioner [ef87731137515cd7f08208abb2245bbe5ae6637bd6fb2bf963d8022e5c3ee1d1] <==
	I1226 21:46:19.067539       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1226 21:46:19.078199       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1226 21:46:19.078247       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1226 21:46:19.087484       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1226 21:46:19.087574       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"59892805-0f59-4c3e-83cc-d53a33e5fc06", APIVersion:"v1", ResourceVersion:"905", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-989445_cd40efb6-99b8-4afd-b157-e61e3f1eacde became leader
	I1226 21:46:19.087674       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-989445_cd40efb6-99b8-4afd-b157-e61e3f1eacde!
	I1226 21:46:19.188482       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-989445_cd40efb6-99b8-4afd-b157-e61e3f1eacde!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-989445 -n addons-989445
helpers_test.go:261: (dbg) Run:  kubectl --context addons-989445 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.49s)
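
Note: the step that actually fails above is the curl probe run inside the node via minikube ssh; curl exit code 28 ("operation timed out") surfaces as "ssh: Process exited with status 28". A minimal Go sketch of that probe, reusing the binary path, profile, and URL shown in the log — the 30-second deadline is an illustrative assumption, not the harness's real timeout:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Bound the probe so a hung ingress fails fast instead of eating the
	// 2m11s seen in the log above (deadline is an assumption for the sketch).
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
		"-p", "addons-989445", "ssh",
		"curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// curl exit code 28 ("operation timed out") propagates through
		// minikube ssh as "ssh: Process exited with status 28".
		fmt.Printf("ingress probe failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("ingress probe ok:\n%s", out)
}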

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (10.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image load --daemon gcr.io/google-containers/addon-resizer:functional-131935 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-131935 image load --daemon gcr.io/google-containers/addon-resizer:functional-131935 --alsologtostderr: (7.801166735s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-131935 image ls: (2.243262709s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-131935" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (10.04s)
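
Note: this failure is a load-then-verify flow — the image is loaded from the local Docker daemon, then "image ls" is checked for the tag. A rough Go sketch of that flow, with binary, profile, and image reference copied from the log; the wrapper itself is illustrative, not the test's code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const (
	bin     = "out/minikube-linux-amd64"                                  // binary path from the log
	profile = "functional-131935"                                        // profile from the log
	image   = "gcr.io/google-containers/addon-resizer:functional-131935" // image ref from the log
)

func main() {
	// Step 1: load the image from the local Docker daemon into the cluster.
	if out, err := exec.Command(bin, "-p", profile, "image", "load", "--daemon", image).CombinedOutput(); err != nil {
		fmt.Printf("image load failed: %v\n%s", err, out)
		return
	}
	// Step 2: list images inside the cluster and check the tag is present;
	// the assertion at functional_test.go:442 amounts to roughly this check.
	ls, err := exec.Command(bin, "-p", profile, "image", "ls").Output()
	if err != nil {
		fmt.Printf("image ls failed: %v\n", err)
		return
	}
	if !strings.Contains(string(ls), image) {
		fmt.Printf("expected %q in image list, got:\n%s", image, ls)
		return
	}
	fmt.Println("image present")
}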

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (184.72s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-038954 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-038954 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.64066092s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-038954 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-038954 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [95ff709e-9119-43d2-bb27-55d27a41c744] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [95ff709e-9119-43d2-bb27-55d27a41c744] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.003420527s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-038954 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1226 21:57:11.686119   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 21:57:39.370794   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-038954 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.341504581s)

** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-038954 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-038954 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1226 21:58:44.448877   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
E1226 21:58:44.454180   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
E1226 21:58:44.464408   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
E1226 21:58:44.484659   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
E1226 21:58:44.524888   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
E1226 21:58:44.605203   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
E1226 21:58:44.765613   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
E1226 21:58:45.086156   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
E1226 21:58:45.727041   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
E1226 21:58:47.008137   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.007795862s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
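
Note: the nslookup above queries the ingress-dns addon, which serves DNS on the node IP (192.168.49.2, per the log). An illustrative Go equivalent of that probe using a custom resolver; the dial target mirrors the test's nslookup arguments, while the timeouts are assumptions of this sketch:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Ignore the default resolver address; query the node directly,
			// as "nslookup hello-john.test 192.168.49.2" does.
			return d.DialContext(ctx, "udp", "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		// A timeout here corresponds to nslookup's
		// ";; connection timed out; no servers could be reached".
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}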
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-038954 addons disable ingress-dns --alsologtostderr -v=1
E1226 21:58:49.569170   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-038954 addons disable ingress-dns --alsologtostderr -v=1: (2.751808304s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-038954 addons disable ingress --alsologtostderr -v=1
E1226 21:58:54.689930   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-038954 addons disable ingress --alsologtostderr -v=1: (7.37629752s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-038954
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-038954:

-- stdout --
	[
	    {
	        "Id": "ae6dabd913b308b6416556feed33945e13295ceedb438d6bb86d813d3ed10a63",
	        "Created": "2023-12-26T21:54:46.918915672Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 53817,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-26T21:54:47.178565373Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/ae6dabd913b308b6416556feed33945e13295ceedb438d6bb86d813d3ed10a63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ae6dabd913b308b6416556feed33945e13295ceedb438d6bb86d813d3ed10a63/hostname",
	        "HostsPath": "/var/lib/docker/containers/ae6dabd913b308b6416556feed33945e13295ceedb438d6bb86d813d3ed10a63/hosts",
	        "LogPath": "/var/lib/docker/containers/ae6dabd913b308b6416556feed33945e13295ceedb438d6bb86d813d3ed10a63/ae6dabd913b308b6416556feed33945e13295ceedb438d6bb86d813d3ed10a63-json.log",
	        "Name": "/ingress-addon-legacy-038954",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-038954:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-038954",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/659676c4f824140d49542a5638832df1ea30a198165a68bbd013a5ddb5173608-init/diff:/var/lib/docker/overlay2/9309fabaee2d1c218955e7e97c12621fc2771807097b157c41ecafdb1f7c4f26/diff",
	                "MergedDir": "/var/lib/docker/overlay2/659676c4f824140d49542a5638832df1ea30a198165a68bbd013a5ddb5173608/merged",
	                "UpperDir": "/var/lib/docker/overlay2/659676c4f824140d49542a5638832df1ea30a198165a68bbd013a5ddb5173608/diff",
	                "WorkDir": "/var/lib/docker/overlay2/659676c4f824140d49542a5638832df1ea30a198165a68bbd013a5ddb5173608/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-038954",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-038954/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-038954",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-038954",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-038954",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3737fd116aa49a3a44212e166e71f60ce601f009d5f0c94317a07f1443a4c76",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c3737fd116aa",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-038954": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ae6dabd913b3",
	                        "ingress-addon-legacy-038954"
	                    ],
	                    "NetworkID": "01b16b674a8bd050c426ce9045742c46ae4a2677e6054e6d32afc29be9cd297e",
	                    "EndpointID": "a98f4dcad884ad5e9712621f5d7d7bf67f41a7e6acba24ac0f83d349730453f1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
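
Note: the Ports section of the inspect dump above shows the API server's 8443/tcp published on 127.0.0.1:32784. Rather than reading the JSON by eye, the same mapping can be pulled out with docker's --format templating; a small illustrative wrapper (container name taken from the log, everything else an assumption of this sketch):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask docker for the host port backing the container's 8443/tcp
	// (the Kubernetes API server) via docker's built-in Go templating.
	out, err := exec.Command("docker", "inspect",
		"--format", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
		"ingress-addon-legacy-038954").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	// Expected here: 32784, matching the Ports section of the dump above.
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
}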
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-038954 -n ingress-addon-legacy-038954
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-038954 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-038954 logs -n 25: (1.027062592s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| dashboard      | --url --port 36195                   | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:54 UTC |
	|                | -p functional-131935                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| update-context | functional-131935                    | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:54 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| ssh            | functional-131935 ssh findmnt        | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:54 UTC |
	|                | -T /mount2                           |                             |         |         |                     |                     |
	| service        | functional-131935 service            | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:54 UTC |
	|                | --namespace=default --https          |                             |         |         |                     |                     |
	|                | --url hello-node                     |                             |         |         |                     |                     |
	| update-context | functional-131935                    | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:54 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| ssh            | functional-131935 ssh findmnt        | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:54 UTC |
	|                | -T /mount3                           |                             |         |         |                     |                     |
	| update-context | functional-131935                    | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:54 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-131935                    | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:54 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| mount          | -p functional-131935                 | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC |                     |
	|                | --kill=true                          |                             |         |         |                     |                     |
	| service        | functional-131935                    | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:54 UTC |
	|                | service hello-node --url             |                             |         |         |                     |                     |
	|                | --format={{.IP}}                     |                             |         |         |                     |                     |
	| image          | functional-131935                    | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:54 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-131935 ssh pgrep          | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-131935                    | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:54 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-131935 image build -t     | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:54 UTC |
	|                | localhost/my-image:functional-131935 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-131935                    | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:54 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| service        | functional-131935 service            | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:54 UTC |
	|                | hello-node --url                     |                             |         |         |                     |                     |
	| image          | functional-131935 image ls           | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:54 UTC |
	| delete         | -p functional-131935                 | functional-131935           | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:54 UTC |
	| start          | -p ingress-addon-legacy-038954       | ingress-addon-legacy-038954 | jenkins | v1.32.0 | 26 Dec 23 21:54 UTC | 26 Dec 23 21:55 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-038954          | ingress-addon-legacy-038954 | jenkins | v1.32.0 | 26 Dec 23 21:55 UTC | 26 Dec 23 21:55 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-038954          | ingress-addon-legacy-038954 | jenkins | v1.32.0 | 26 Dec 23 21:55 UTC | 26 Dec 23 21:55 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-038954          | ingress-addon-legacy-038954 | jenkins | v1.32.0 | 26 Dec 23 21:56 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-038954 ip       | ingress-addon-legacy-038954 | jenkins | v1.32.0 | 26 Dec 23 21:58 UTC | 26 Dec 23 21:58 UTC |
	| addons         | ingress-addon-legacy-038954          | ingress-addon-legacy-038954 | jenkins | v1.32.0 | 26 Dec 23 21:58 UTC | 26 Dec 23 21:58 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-038954          | ingress-addon-legacy-038954 | jenkins | v1.32.0 | 26 Dec 23 21:58 UTC | 26 Dec 23 21:58 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
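	Note: the "ssh ... curl" row at 21:56 UTC above has no end time, which is how this table marks a command that never completed successfully; it is the failed ingress probe for this profile. A minimal sketch of re-running the same probe by hand (the profile name, URL, and Host header are taken from the table row; the --max-time flag is an assumption added here so a hung probe exits quickly instead of blocking the shell, since curl reports a timeout as exit status 28):
	
	  # hypothetical manual re-run of the failed ingress probe
	  out/minikube-linux-amd64 -p ingress-addon-legacy-038954 ssh \
	    "curl -s --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	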
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 21:54:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 21:54:34.101750   53185 out.go:296] Setting OutFile to fd 1 ...
	I1226 21:54:34.101928   53185 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:54:34.101946   53185 out.go:309] Setting ErrFile to fd 2...
	I1226 21:54:34.101951   53185 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:54:34.102147   53185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
	I1226 21:54:34.102818   53185 out.go:303] Setting JSON to false
	I1226 21:54:34.103996   53185 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2224,"bootTime":1703625450,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1226 21:54:34.104059   53185 start.go:138] virtualization: kvm guest
	I1226 21:54:34.106560   53185 out.go:177] * [ingress-addon-legacy-038954] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1226 21:54:34.108074   53185 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 21:54:34.109527   53185 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 21:54:34.108155   53185 notify.go:220] Checking for updates...
	I1226 21:54:34.112249   53185 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 21:54:34.113595   53185 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	I1226 21:54:34.114874   53185 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1226 21:54:34.116261   53185 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 21:54:34.117737   53185 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 21:54:34.143147   53185 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 21:54:34.143309   53185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:54:34.193912   53185 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2023-12-26 21:54:34.185238679 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 21:54:34.194016   53185 docker.go:295] overlay module found
	I1226 21:54:34.196108   53185 out.go:177] * Using the docker driver based on user configuration
	I1226 21:54:34.197527   53185 start.go:298] selected driver: docker
	I1226 21:54:34.197542   53185 start.go:902] validating driver "docker" against <nil>
	I1226 21:54:34.197556   53185 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 21:54:34.198352   53185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:54:34.249205   53185 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2023-12-26 21:54:34.241480553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 21:54:34.249369   53185 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 21:54:34.249665   53185 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 21:54:34.251631   53185 out.go:177] * Using Docker driver with root privileges
	I1226 21:54:34.252976   53185 cni.go:84] Creating CNI manager for ""
	I1226 21:54:34.253000   53185 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:54:34.253020   53185 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1226 21:54:34.253034   53185 start_flags.go:323] config:
	{Name:ingress-addon-legacy-038954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-038954 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:54:34.254598   53185 out.go:177] * Starting control plane node ingress-addon-legacy-038954 in cluster ingress-addon-legacy-038954
	I1226 21:54:34.255823   53185 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 21:54:34.257127   53185 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 21:54:34.258589   53185 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1226 21:54:34.258681   53185 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 21:54:34.274805   53185 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 21:54:34.274826   53185 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I1226 21:54:34.288542   53185 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1226 21:54:34.288562   53185 cache.go:56] Caching tarball of preloaded images
	I1226 21:54:34.288727   53185 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1226 21:54:34.290707   53185 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1226 21:54:34.291966   53185 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1226 21:54:34.324348   53185 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1226 21:54:38.652558   53185 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1226 21:54:38.652647   53185 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1226 21:54:39.659094   53185 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1226 21:54:39.659444   53185 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/config.json ...
	I1226 21:54:39.659476   53185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/config.json: {Name:mkd0687df95ae8b78a174da388135e41876a6634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:54:39.659694   53185 cache.go:194] Successfully downloaded all kic artifacts
	I1226 21:54:39.659725   53185 start.go:365] acquiring machines lock for ingress-addon-legacy-038954: {Name:mk8da7804a653a22d08667c2eb9bfd5f75c63bfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 21:54:39.659794   53185 start.go:369] acquired machines lock for "ingress-addon-legacy-038954" in 49.128µs
	I1226 21:54:39.659817   53185 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-038954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-038954 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 21:54:39.659907   53185 start.go:125] createHost starting for "" (driver="docker")
	I1226 21:54:39.663640   53185 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1226 21:54:39.663881   53185 start.go:159] libmachine.API.Create for "ingress-addon-legacy-038954" (driver="docker")
	I1226 21:54:39.663906   53185 client.go:168] LocalClient.Create starting
	I1226 21:54:39.663987   53185 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem
	I1226 21:54:39.664033   53185 main.go:141] libmachine: Decoding PEM data...
	I1226 21:54:39.664056   53185 main.go:141] libmachine: Parsing certificate...
	I1226 21:54:39.664117   53185 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem
	I1226 21:54:39.664142   53185 main.go:141] libmachine: Decoding PEM data...
	I1226 21:54:39.664159   53185 main.go:141] libmachine: Parsing certificate...
	I1226 21:54:39.664518   53185 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-038954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 21:54:39.680305   53185 cli_runner.go:211] docker network inspect ingress-addon-legacy-038954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 21:54:39.680420   53185 network_create.go:281] running [docker network inspect ingress-addon-legacy-038954] to gather additional debugging logs...
	I1226 21:54:39.680444   53185 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-038954
	W1226 21:54:39.696263   53185 cli_runner.go:211] docker network inspect ingress-addon-legacy-038954 returned with exit code 1
	I1226 21:54:39.696302   53185 network_create.go:284] error running [docker network inspect ingress-addon-legacy-038954]: docker network inspect ingress-addon-legacy-038954: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-038954 not found
	I1226 21:54:39.696317   53185 network_create.go:286] output of [docker network inspect ingress-addon-legacy-038954]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-038954 not found
	
	** /stderr **
	I1226 21:54:39.696435   53185 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 21:54:39.711283   53185 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000132ea0}
	I1226 21:54:39.711333   53185 network_create.go:124] attempt to create docker network ingress-addon-legacy-038954 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1226 21:54:39.711385   53185 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-038954 ingress-addon-legacy-038954
	I1226 21:54:39.761603   53185 network_create.go:108] docker network ingress-addon-legacy-038954 192.168.49.0/24 created
	I1226 21:54:39.761637   53185 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-038954" container
	I1226 21:54:39.761696   53185 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 21:54:39.776534   53185 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-038954 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-038954 --label created_by.minikube.sigs.k8s.io=true
	I1226 21:54:39.792667   53185 oci.go:103] Successfully created a docker volume ingress-addon-legacy-038954
	I1226 21:54:39.792738   53185 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-038954-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-038954 --entrypoint /usr/bin/test -v ingress-addon-legacy-038954:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 21:54:41.513000   53185 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-038954-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-038954 --entrypoint /usr/bin/test -v ingress-addon-legacy-038954:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib: (1.720216s)
	I1226 21:54:41.513034   53185 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-038954
	I1226 21:54:41.513061   53185 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1226 21:54:41.513084   53185 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 21:54:41.513146   53185 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-038954:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I1226 21:54:46.855459   53185 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-038954:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (5.342259924s)
	I1226 21:54:46.855510   53185 kic.go:203] duration metric: took 5.342425 seconds to extract preloaded images to volume
	W1226 21:54:46.855646   53185 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1226 21:54:46.855737   53185 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1226 21:54:46.904953   53185 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-038954 --name ingress-addon-legacy-038954 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-038954 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-038954 --network ingress-addon-legacy-038954 --ip 192.168.49.2 --volume ingress-addon-legacy-038954:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I1226 21:54:47.186697   53185 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-038954 --format={{.State.Running}}
	I1226 21:54:47.203636   53185 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-038954 --format={{.State.Status}}
	I1226 21:54:47.220739   53185 cli_runner.go:164] Run: docker exec ingress-addon-legacy-038954 stat /var/lib/dpkg/alternatives/iptables
	I1226 21:54:47.284577   53185 oci.go:144] the created container "ingress-addon-legacy-038954" has a running status.
	I1226 21:54:47.284608   53185 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/ingress-addon-legacy-038954/id_rsa...
	I1226 21:54:47.643300   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/ingress-addon-legacy-038954/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1226 21:54:47.643350   53185 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17857-7214/.minikube/machines/ingress-addon-legacy-038954/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1226 21:54:47.666439   53185 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-038954 --format={{.State.Status}}
	I1226 21:54:47.687428   53185 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1226 21:54:47.687453   53185 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-038954 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1226 21:54:47.761463   53185 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-038954 --format={{.State.Status}}
	I1226 21:54:47.779376   53185 machine.go:88] provisioning docker machine ...
	I1226 21:54:47.779416   53185 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-038954"
	I1226 21:54:47.779500   53185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-038954
	I1226 21:54:47.797670   53185 main.go:141] libmachine: Using SSH client type: native
	I1226 21:54:47.798041   53185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1226 21:54:47.798063   53185 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-038954 && echo "ingress-addon-legacy-038954" | sudo tee /etc/hostname
	I1226 21:54:47.923617   53185 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-038954
	
	I1226 21:54:47.923690   53185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-038954
	I1226 21:54:47.940045   53185 main.go:141] libmachine: Using SSH client type: native
	I1226 21:54:47.940373   53185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1226 21:54:47.940395   53185 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-038954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-038954/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-038954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 21:54:48.058435   53185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 21:54:48.058463   53185 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-7214/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-7214/.minikube}
	I1226 21:54:48.058499   53185 ubuntu.go:177] setting up certificates
	I1226 21:54:48.058512   53185 provision.go:83] configureAuth start
	I1226 21:54:48.058563   53185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-038954
	I1226 21:54:48.074521   53185 provision.go:138] copyHostCerts
	I1226 21:54:48.074553   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem
	I1226 21:54:48.074578   53185 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem, removing ...
	I1226 21:54:48.074587   53185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem
	I1226 21:54:48.074647   53185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem (1082 bytes)
	I1226 21:54:48.074742   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem
	I1226 21:54:48.074765   53185 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem, removing ...
	I1226 21:54:48.074772   53185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem
	I1226 21:54:48.074799   53185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem (1123 bytes)
	I1226 21:54:48.074843   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem
	I1226 21:54:48.074858   53185 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem, removing ...
	I1226 21:54:48.074864   53185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem
	I1226 21:54:48.074883   53185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem (1679 bytes)
	I1226 21:54:48.074933   53185 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-038954 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-038954]
	I1226 21:54:48.250004   53185 provision.go:172] copyRemoteCerts
	I1226 21:54:48.250061   53185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 21:54:48.250094   53185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-038954
	I1226 21:54:48.266256   53185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/ingress-addon-legacy-038954/id_rsa Username:docker}
	I1226 21:54:48.354803   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1226 21:54:48.354866   53185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 21:54:48.375177   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1226 21:54:48.375230   53185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1226 21:54:48.394901   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1226 21:54:48.394996   53185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1226 21:54:48.415209   53185 provision.go:86] duration metric: configureAuth took 356.682071ms
	I1226 21:54:48.415231   53185 ubuntu.go:193] setting minikube options for container-runtime
	I1226 21:54:48.415370   53185 config.go:182] Loaded profile config "ingress-addon-legacy-038954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1226 21:54:48.415450   53185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-038954
	I1226 21:54:48.431568   53185 main.go:141] libmachine: Using SSH client type: native
	I1226 21:54:48.431872   53185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1226 21:54:48.431889   53185 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 21:54:48.650977   53185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 21:54:48.651028   53185 machine.go:91] provisioned docker machine in 871.61722ms
	I1226 21:54:48.651040   53185 client.go:171] LocalClient.Create took 8.98712669s
	I1226 21:54:48.651063   53185 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-038954" took 8.987181359s
	I1226 21:54:48.651077   53185 start.go:300] post-start starting for "ingress-addon-legacy-038954" (driver="docker")
	I1226 21:54:48.651091   53185 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 21:54:48.651144   53185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 21:54:48.651186   53185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-038954
	I1226 21:54:48.666614   53185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/ingress-addon-legacy-038954/id_rsa Username:docker}
	I1226 21:54:48.755037   53185 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 21:54:48.757873   53185 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 21:54:48.757903   53185 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 21:54:48.757912   53185 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 21:54:48.757917   53185 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1226 21:54:48.757926   53185 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-7214/.minikube/addons for local assets ...
	I1226 21:54:48.757982   53185 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-7214/.minikube/files for local assets ...
	I1226 21:54:48.758067   53185 filesync.go:149] local asset: /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem -> 139762.pem in /etc/ssl/certs
	I1226 21:54:48.758076   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem -> /etc/ssl/certs/139762.pem
	I1226 21:54:48.758157   53185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 21:54:48.765480   53185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem --> /etc/ssl/certs/139762.pem (1708 bytes)
	I1226 21:54:48.785662   53185 start.go:303] post-start completed in 134.562784ms
	I1226 21:54:48.785962   53185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-038954
	I1226 21:54:48.801484   53185 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/config.json ...
	I1226 21:54:48.801755   53185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 21:54:48.801805   53185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-038954
	I1226 21:54:48.817645   53185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/ingress-addon-legacy-038954/id_rsa Username:docker}
	I1226 21:54:48.898867   53185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 21:54:48.902483   53185 start.go:128] duration metric: createHost completed in 9.242564128s
	I1226 21:54:48.902503   53185 start.go:83] releasing machines lock for "ingress-addon-legacy-038954", held for 9.242698086s
	I1226 21:54:48.902559   53185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-038954
	I1226 21:54:48.918080   53185 ssh_runner.go:195] Run: cat /version.json
	I1226 21:54:48.918120   53185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-038954
	I1226 21:54:48.918166   53185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 21:54:48.918213   53185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-038954
	I1226 21:54:48.933755   53185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/ingress-addon-legacy-038954/id_rsa Username:docker}
	I1226 21:54:48.935280   53185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/ingress-addon-legacy-038954/id_rsa Username:docker}
	I1226 21:54:49.021670   53185 ssh_runner.go:195] Run: systemctl --version
	I1226 21:54:49.109590   53185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1226 21:54:49.244017   53185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 21:54:49.248496   53185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 21:54:49.265605   53185 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1226 21:54:49.265690   53185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 21:54:49.291340   53185 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1226 21:54:49.291362   53185 start.go:475] detecting cgroup driver to use...
	I1226 21:54:49.291386   53185 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 21:54:49.291421   53185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 21:54:49.303956   53185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 21:54:49.313141   53185 docker.go:203] disabling cri-docker service (if available) ...
	I1226 21:54:49.313190   53185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1226 21:54:49.324492   53185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1226 21:54:49.336361   53185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1226 21:54:49.410074   53185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1226 21:54:49.483276   53185 docker.go:219] disabling docker service ...
	I1226 21:54:49.483333   53185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1226 21:54:49.499966   53185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1226 21:54:49.509618   53185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1226 21:54:49.582100   53185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1226 21:54:49.659291   53185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1226 21:54:49.669259   53185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 21:54:49.683150   53185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1226 21:54:49.683222   53185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 21:54:49.692009   53185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1226 21:54:49.692086   53185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 21:54:49.700767   53185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 21:54:49.708869   53185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 21:54:49.716822   53185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 21:54:49.724052   53185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 21:54:49.730707   53185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 21:54:49.737318   53185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 21:54:49.805669   53185 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1226 21:54:49.904774   53185 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1226 21:54:49.904826   53185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1226 21:54:49.907975   53185 start.go:543] Will wait 60s for crictl version
	I1226 21:54:49.908018   53185 ssh_runner.go:195] Run: which crictl
	I1226 21:54:49.910876   53185 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 21:54:49.940911   53185 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1226 21:54:49.940979   53185 ssh_runner.go:195] Run: crio --version
	I1226 21:54:49.971375   53185 ssh_runner.go:195] Run: crio --version
	I1226 21:54:50.004208   53185 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1226 21:54:50.005616   53185 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-038954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 21:54:50.020730   53185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1226 21:54:50.024000   53185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 21:54:50.033406   53185 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1226 21:54:50.033456   53185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 21:54:50.074673   53185 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1226 21:54:50.074743   53185 ssh_runner.go:195] Run: which lz4
	I1226 21:54:50.077806   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1226 21:54:50.077897   53185 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1226 21:54:50.080700   53185 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1226 21:54:50.080725   53185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1226 21:54:50.965387   53185 crio.go:444] Took 0.887517 seconds to copy over tarball
	I1226 21:54:50.965489   53185 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1226 21:54:53.186856   53185 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.221328583s)
	I1226 21:54:53.186901   53185 crio.go:451] Took 2.221477 seconds to extract the tarball
	I1226 21:54:53.186910   53185 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1226 21:54:53.255369   53185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 21:54:53.288369   53185 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1226 21:54:53.288398   53185 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1226 21:54:53.288481   53185 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1226 21:54:53.288501   53185 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 21:54:53.288510   53185 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1226 21:54:53.288528   53185 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1226 21:54:53.288549   53185 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1226 21:54:53.288512   53185 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1226 21:54:53.288469   53185 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 21:54:53.288689   53185 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1226 21:54:53.289923   53185 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1226 21:54:53.289932   53185 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 21:54:53.289945   53185 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1226 21:54:53.289923   53185 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1226 21:54:53.289924   53185 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1226 21:54:53.290014   53185 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1226 21:54:53.290059   53185 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 21:54:53.289929   53185 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1226 21:54:53.455222   53185 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 21:54:53.490819   53185 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1226 21:54:53.526240   53185 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1226 21:54:53.532105   53185 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1226 21:54:53.535409   53185 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1226 21:54:53.547233   53185 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1226 21:54:53.586277   53185 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 21:54:53.589748   53185 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1226 21:54:53.589782   53185 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1226 21:54:53.589788   53185 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1226 21:54:53.589810   53185 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1226 21:54:53.589830   53185 ssh_runner.go:195] Run: which crictl
	I1226 21:54:53.589848   53185 ssh_runner.go:195] Run: which crictl
	I1226 21:54:53.589861   53185 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1226 21:54:53.589885   53185 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1226 21:54:53.589919   53185 ssh_runner.go:195] Run: which crictl
	I1226 21:54:53.589929   53185 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1226 21:54:53.589946   53185 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1226 21:54:53.589959   53185 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1226 21:54:53.589982   53185 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1226 21:54:53.589994   53185 ssh_runner.go:195] Run: which crictl
	I1226 21:54:53.590016   53185 ssh_runner.go:195] Run: which crictl
	I1226 21:54:53.612633   53185 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1226 21:54:53.621265   53185 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1226 21:54:53.621300   53185 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 21:54:53.621334   53185 ssh_runner.go:195] Run: which crictl
	I1226 21:54:53.621345   53185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1226 21:54:53.621380   53185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1226 21:54:53.621451   53185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1226 21:54:53.621466   53185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1226 21:54:53.621534   53185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1226 21:54:53.767849   53185 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1226 21:54:53.767892   53185 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1226 21:54:53.767931   53185 ssh_runner.go:195] Run: which crictl
	I1226 21:54:53.772491   53185 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1226 21:54:53.772543   53185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 21:54:53.772561   53185 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1226 21:54:53.772591   53185 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1226 21:54:53.772671   53185 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1226 21:54:53.772802   53185 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1226 21:54:53.772804   53185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1226 21:54:53.802937   53185 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1226 21:54:53.804302   53185 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1226 21:54:53.804351   53185 cache_images.go:92] LoadImages completed in 515.941155ms
	W1226 21:54:53.804419   53185 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
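
The exchange above is minikube's cached-image reconciliation: each required image is inspected in the runtime by tag, and when the tag is absent or bound to a different ID than the local cache expects, the image is flagged "needs transfer", the stale tag is removed with crictl, and a load from the cache directory is scheduled (here the etcd tarball is missing on the host, hence the warning). A minimal Go sketch of that decision, assuming a hypothetical runOnNode SSH helper rather than minikube's real API:

    package sketch

    import "strings"

    // needsTransfer mirrors the check logged above: inspect the tag in the
    // runtime and transfer only when it is missing or bound to another ID.
    // runOnNode is a hypothetical helper that runs a command on the node
    // over SSH and returns its stdout.
    func needsTransfer(image, wantID string, runOnNode func(args ...string) (string, error)) bool {
        out, err := runOnNode("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image)
        if err != nil {
            return true // tag not present in the runtime: must transfer
        }
        return strings.TrimSpace(out) != wantID
    }
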
	I1226 21:54:53.804473   53185 ssh_runner.go:195] Run: crio config
	I1226 21:54:53.881528   53185 cni.go:84] Creating CNI manager for ""
	I1226 21:54:53.881551   53185 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:54:53.881566   53185 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 21:54:53.881583   53185 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-038954 NodeName:ingress-addon-legacy-038954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1226 21:54:53.881750   53185 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-038954"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
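The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the options struct logged at kubeadm.go:176. A minimal text/template sketch of that kind of rendering; the template and struct here are illustrative stand-ins, not minikube's actual ones:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeadmOpts is an illustrative subset of the options logged above.
    type kubeadmOpts struct {
        AdvertiseAddress string
        APIServerPort    int
        NodeName         string
    }

    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    `

    func main() {
        // Render the first YAML document from the options struct.
        t := template.Must(template.New("init").Parse(initTmpl))
        opts := kubeadmOpts{"192.168.49.2", 8443, "ingress-addon-legacy-038954"}
        _ = t.Execute(os.Stdout, opts)
    }
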
	I1226 21:54:53.881856   53185 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-038954 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-038954 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1226 21:54:53.881925   53185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1226 21:54:53.889835   53185 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 21:54:53.889915   53185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1226 21:54:53.897887   53185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1226 21:54:53.913142   53185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1226 21:54:53.928564   53185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1226 21:54:53.943709   53185 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1226 21:54:53.946559   53185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
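
The bash one-liner above makes the hosts entry idempotent: grep -v strips any previous control-plane.minikube.internal line, echo appends the current mapping, and the result replaces /etc/hosts through a temp file so the file is never truncated mid-write. A sketch of how such a command string can be assembled (the helper name is hypothetical):

    package sketch

    import "fmt"

    // hostsUpdateCmd rebuilds the one-liner above: drop any stale line for
    // host, append the fresh ip->host mapping, then copy the temp file back
    // over /etc/hosts with sudo.
    func hostsUpdateCmd(ip, host string) string {
        return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
            host, ip, host)
    }
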
	I1226 21:54:53.955559   53185 certs.go:56] Setting up /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954 for IP: 192.168.49.2
	I1226 21:54:53.955585   53185 certs.go:190] acquiring lock for shared ca certs: {Name:mk3336638bd66053c32b2c7f6f2d1c6a563fd761 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:54:53.955743   53185 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.key
	I1226 21:54:53.955790   53185 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.key
	I1226 21:54:53.955837   53185 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.key
	I1226 21:54:53.955852   53185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt with IP's: []
	I1226 21:54:54.144704   53185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt ...
	I1226 21:54:54.144735   53185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: {Name:mk9e271d2056a4af54547827025fcbd4d1124c8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:54:54.144938   53185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.key ...
	I1226 21:54:54.144967   53185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.key: {Name:mkd030c68cafac56febd544a3468e266e4a54b82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:54:54.145097   53185 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/apiserver.key.dd3b5fb2
	I1226 21:54:54.145116   53185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1226 21:54:54.253723   53185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/apiserver.crt.dd3b5fb2 ...
	I1226 21:54:54.253753   53185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/apiserver.crt.dd3b5fb2: {Name:mk8690f38e51d92d9eeaea4887be16de18b3933a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:54:54.253922   53185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/apiserver.key.dd3b5fb2 ...
	I1226 21:54:54.253939   53185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/apiserver.key.dd3b5fb2: {Name:mk3c2d59441565273037d857962ea5ed0258ef48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:54:54.254036   53185 certs.go:337] copying /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/apiserver.crt
	I1226 21:54:54.254108   53185 certs.go:341] copying /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/apiserver.key
	I1226 21:54:54.254155   53185 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/proxy-client.key
	I1226 21:54:54.254169   53185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/proxy-client.crt with IP's: []
	I1226 21:54:54.345160   53185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/proxy-client.crt ...
	I1226 21:54:54.345212   53185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/proxy-client.crt: {Name:mk76f661f5e7479dd212a18e660ab34843ccb794 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:54:54.345518   53185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/proxy-client.key ...
	I1226 21:54:54.345547   53185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/proxy-client.key: {Name:mk1c26a3d60dc1cb0a340bd7c4719960abcc30e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:54:54.345658   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1226 21:54:54.345677   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1226 21:54:54.345688   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1226 21:54:54.345701   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1226 21:54:54.345713   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1226 21:54:54.345727   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1226 21:54:54.345741   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1226 21:54:54.345754   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
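
The certificate work above reuses the shared minikubeCA and proxyClientCA keys and mints per-profile material: a client cert for kubectl, an apiserver serving cert covering the node IP, the service VIP 10.96.0.1, and loopback, and a proxy-client cert for the aggregator. A compact crypto/x509 sketch of the serving-cert step, under the assumption that the CA cert and key have already been parsed; it is an illustration of the mechanism, not minikube's code:

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signServingCert mints a fresh key pair and a serving certificate
    // signed by the shared CA, valid for the IPs logged above.
    func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            IPAddresses: []net.IP{
                net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            },
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        return der, key, err
    }
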
	I1226 21:54:54.345828   53185 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/13976.pem (1338 bytes)
	W1226 21:54:54.345870   53185 certs.go:433] ignoring /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/13976_empty.pem, impossibly tiny 0 bytes
	I1226 21:54:54.345884   53185 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca-key.pem (1679 bytes)
	I1226 21:54:54.345910   53185 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem (1082 bytes)
	I1226 21:54:54.345936   53185 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem (1123 bytes)
	I1226 21:54:54.345966   53185 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem (1679 bytes)
	I1226 21:54:54.346010   53185 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem (1708 bytes)
	I1226 21:54:54.346043   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/13976.pem -> /usr/share/ca-certificates/13976.pem
	I1226 21:54:54.346056   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem -> /usr/share/ca-certificates/139762.pem
	I1226 21:54:54.346068   53185 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1226 21:54:54.346816   53185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1226 21:54:54.367830   53185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1226 21:54:54.387799   53185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1226 21:54:54.407495   53185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1226 21:54:54.426952   53185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 21:54:54.446641   53185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 21:54:54.466917   53185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 21:54:54.486468   53185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1226 21:54:54.505672   53185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/certs/13976.pem --> /usr/share/ca-certificates/13976.pem (1338 bytes)
	I1226 21:54:54.525243   53185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem --> /usr/share/ca-certificates/139762.pem (1708 bytes)
	I1226 21:54:54.544543   53185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 21:54:54.565072   53185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1226 21:54:54.579805   53185 ssh_runner.go:195] Run: openssl version
	I1226 21:54:54.584525   53185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 21:54:54.592202   53185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 21:54:54.595510   53185 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I1226 21:54:54.595557   53185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 21:54:54.601537   53185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1226 21:54:54.609200   53185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13976.pem && ln -fs /usr/share/ca-certificates/13976.pem /etc/ssl/certs/13976.pem"
	I1226 21:54:54.616905   53185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13976.pem
	I1226 21:54:54.619658   53185 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 26 21:51 /usr/share/ca-certificates/13976.pem
	I1226 21:54:54.619703   53185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13976.pem
	I1226 21:54:54.625513   53185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13976.pem /etc/ssl/certs/51391683.0"
	I1226 21:54:54.633608   53185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139762.pem && ln -fs /usr/share/ca-certificates/139762.pem /etc/ssl/certs/139762.pem"
	I1226 21:54:54.641277   53185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139762.pem
	I1226 21:54:54.644079   53185 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 26 21:51 /usr/share/ca-certificates/139762.pem
	I1226 21:54:54.644121   53185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139762.pem
	I1226 21:54:54.649953   53185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139762.pem /etc/ssl/certs/3ec20f2e.0"
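
The openssl x509 -hash calls above compute OpenSSL's subject-name hash, and the <hash>.0 symlinks created afterwards (b5213941.0 for the minikube CA, 51391683.0 and 3ec20f2e.0 for the test PEMs) are the names OpenSSL's hashed-directory lookup in /etc/ssl/certs resolves, which is what makes these certificates system-trusted on the node. A sketch of the hash-and-link step:

    package sketch

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCA reproduces the step above: ask openssl for the subject hash,
    // then create the <hash>.0 symlink that OpenSSL's cert directory
    // lookup expects.
    func linkCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        return os.Symlink(pemPath, filepath.Join("/etc/ssl/certs", hash+".0"))
    }
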
	I1226 21:54:54.657441   53185 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 21:54:54.660442   53185 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 21:54:54.660488   53185 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-038954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-038954 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:54:54.660550   53185 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1226 21:54:54.660584   53185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1226 21:54:54.691381   53185 cri.go:89] found id: ""
	I1226 21:54:54.691439   53185 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1226 21:54:54.699153   53185 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1226 21:54:54.706336   53185 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1226 21:54:54.706384   53185 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1226 21:54:54.713427   53185 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 21:54:54.713464   53185 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1226 21:54:54.754635   53185 kubeadm.go:322] W1226 21:54:54.754080    1378 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1226 21:54:54.790279   53185 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1226 21:54:54.854156   53185 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 21:54:57.259416   53185 kubeadm.go:322] W1226 21:54:57.259069    1378 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1226 21:54:57.260220   53185 kubeadm.go:322] W1226 21:54:57.259985    1378 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1226 21:55:04.717026   53185 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1226 21:55:04.717081   53185 kubeadm.go:322] [preflight] Running pre-flight checks
	I1226 21:55:04.717154   53185 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1226 21:55:04.717207   53185 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1226 21:55:04.717237   53185 kubeadm.go:322] OS: Linux
	I1226 21:55:04.717279   53185 kubeadm.go:322] CGROUPS_CPU: enabled
	I1226 21:55:04.717319   53185 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1226 21:55:04.717365   53185 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1226 21:55:04.717406   53185 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1226 21:55:04.717485   53185 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1226 21:55:04.717579   53185 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1226 21:55:04.717685   53185 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1226 21:55:04.717794   53185 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1226 21:55:04.717909   53185 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1226 21:55:04.718032   53185 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 21:55:04.718102   53185 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 21:55:04.718175   53185 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1226 21:55:04.718275   53185 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 21:55:04.719937   53185 out.go:204]   - Generating certificates and keys ...
	I1226 21:55:04.720018   53185 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1226 21:55:04.720080   53185 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1226 21:55:04.720171   53185 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1226 21:55:04.720249   53185 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1226 21:55:04.720321   53185 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1226 21:55:04.720369   53185 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1226 21:55:04.720414   53185 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1226 21:55:04.720560   53185 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-038954 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1226 21:55:04.720623   53185 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1226 21:55:04.720777   53185 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-038954 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1226 21:55:04.720875   53185 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1226 21:55:04.720963   53185 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1226 21:55:04.721019   53185 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1226 21:55:04.721098   53185 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 21:55:04.721175   53185 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 21:55:04.721245   53185 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 21:55:04.721327   53185 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 21:55:04.721399   53185 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 21:55:04.721459   53185 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 21:55:04.722905   53185 out.go:204]   - Booting up control plane ...
	I1226 21:55:04.722974   53185 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 21:55:04.723064   53185 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 21:55:04.723156   53185 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 21:55:04.723256   53185 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 21:55:04.723411   53185 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1226 21:55:04.723509   53185 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.002201 seconds
	I1226 21:55:04.723665   53185 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1226 21:55:04.723865   53185 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1226 21:55:04.723940   53185 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1226 21:55:04.724114   53185 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-038954 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1226 21:55:04.724166   53185 kubeadm.go:322] [bootstrap-token] Using token: 01f967.nr7uj16ctsgfgid4
	I1226 21:55:04.726635   53185 out.go:204]   - Configuring RBAC rules ...
	I1226 21:55:04.726740   53185 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1226 21:55:04.726810   53185 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1226 21:55:04.726926   53185 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1226 21:55:04.727065   53185 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1226 21:55:04.727242   53185 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1226 21:55:04.727355   53185 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1226 21:55:04.727485   53185 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1226 21:55:04.727537   53185 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1226 21:55:04.727575   53185 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1226 21:55:04.727581   53185 kubeadm.go:322] 
	I1226 21:55:04.727628   53185 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1226 21:55:04.727634   53185 kubeadm.go:322] 
	I1226 21:55:04.727694   53185 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1226 21:55:04.727702   53185 kubeadm.go:322] 
	I1226 21:55:04.727728   53185 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1226 21:55:04.727776   53185 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1226 21:55:04.727821   53185 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1226 21:55:04.727827   53185 kubeadm.go:322] 
	I1226 21:55:04.727889   53185 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1226 21:55:04.727971   53185 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1226 21:55:04.728040   53185 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1226 21:55:04.728047   53185 kubeadm.go:322] 
	I1226 21:55:04.728135   53185 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1226 21:55:04.728238   53185 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1226 21:55:04.728246   53185 kubeadm.go:322] 
	I1226 21:55:04.728335   53185 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 01f967.nr7uj16ctsgfgid4 \
	I1226 21:55:04.728455   53185 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:cbd3139c85275a56e0c84c386786206b386d7a2d9a6f7a7acac9428358424ddc \
	I1226 21:55:04.728492   53185 kubeadm.go:322]     --control-plane 
	I1226 21:55:04.728502   53185 kubeadm.go:322] 
	I1226 21:55:04.728613   53185 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1226 21:55:04.728624   53185 kubeadm.go:322] 
	I1226 21:55:04.728791   53185 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 01f967.nr7uj16ctsgfgid4 \
	I1226 21:55:04.728915   53185 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:cbd3139c85275a56e0c84c386786206b386d7a2d9a6f7a7acac9428358424ddc 
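
The --discovery-token-ca-cert-hash printed in the join commands above pins the cluster CA for joining nodes: kubeadm computes it as the SHA-256 digest of the CA certificate's Subject Public Key Info (the same pinning format as RFC 7469). A sketch that recomputes the hash from the ca.crt written to the node earlier, to be run where that file exists:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Recompute the join hash the way kubeadm does: SHA-256 over the
        // CA certificate's RawSubjectPublicKeyInfo, printed as sha256:<hex>.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
    }
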
	I1226 21:55:04.728929   53185 cni.go:84] Creating CNI manager for ""
	I1226 21:55:04.728940   53185 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:55:04.730450   53185 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1226 21:55:04.732032   53185 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 21:55:04.735572   53185 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1226 21:55:04.735589   53185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 21:55:04.751574   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
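
With the docker driver and the crio runtime, the lines above show minikube selecting kindnet, checking that the portmap CNI plugin binary exists, copying the manifest to the node, and applying it with the version-matched kubectl binary. A sketch of that apply step, with a hypothetical runOnNode helper standing in for minikube's ssh_runner:

    package sketch

    // applyCNI mirrors the apply step above; the paths are taken from the
    // log, and runOnNode is a hypothetical helper executing a command on
    // the node over SSH.
    func applyCNI(runOnNode func(args ...string) error) error {
        return runOnNode(
            "sudo", "/var/lib/minikube/binaries/v1.18.20/kubectl",
            "apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml",
        )
    }
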
	I1226 21:55:05.238376   53185 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1226 21:55:05.238463   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:05.238475   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b minikube.k8s.io/name=ingress-addon-legacy-038954 minikube.k8s.io/updated_at=2023_12_26T21_55_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:05.312380   53185 ops.go:34] apiserver oom_adj: -16
	I1226 21:55:05.312384   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:05.812536   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:06.312783   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:06.813316   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:07.312761   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:07.813061   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:08.312648   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:08.813314   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:09.313169   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:09.812779   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:10.312720   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:10.812981   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:11.312885   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:11.812764   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:12.313391   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:12.812886   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:13.312804   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:13.813214   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:14.312952   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:14.812615   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:15.313348   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:15.812819   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:16.312499   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:16.812845   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:17.313186   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:17.813323   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:18.313263   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:18.813427   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:19.312557   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:19.812672   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:20.312525   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:20.813178   53185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:55:20.876054   53185 kubeadm.go:1088] duration metric: took 15.637658423s to wait for elevateKubeSystemPrivileges.
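
The burst of "kubectl get sa default" calls above is a 500ms poll: the controller-manager creates the default ServiceAccount asynchronously after init, and minikube waits for it before declaring elevateKubeSystemPrivileges done (the minikube-rbac ClusterRoleBinding granting cluster-admin to kube-system:default was created a few lines earlier). A sketch of the loop, with check standing in as a hypothetical wrapper for the kubectl invocation:

    package sketch

    import (
        "fmt"
        "time"
    )

    // waitForDefaultSA polls check every 500ms until it succeeds or the
    // timeout elapses, matching the cadence visible in the log above.
    func waitForDefaultSA(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := check(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default service account not ready after %s", timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
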
	I1226 21:55:20.876089   53185 kubeadm.go:406] StartCluster complete in 26.215608153s
	I1226 21:55:20.876106   53185 settings.go:142] acquiring lock: {Name:mk12d34f71cd28d3e5987ed147ca378c18cddf69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:55:20.876164   53185 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 21:55:20.876810   53185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/kubeconfig: {Name:mkba7ef3601947363f4aefe62b6956e6c044a4a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:55:20.877042   53185 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1226 21:55:20.877160   53185 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1226 21:55:20.877239   53185 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-038954"
	I1226 21:55:20.877253   53185 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-038954"
	I1226 21:55:20.877269   53185 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-038954"
	I1226 21:55:20.877279   53185 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-038954"
	I1226 21:55:20.877303   53185 config.go:182] Loaded profile config "ingress-addon-legacy-038954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1226 21:55:20.877326   53185 host.go:66] Checking if "ingress-addon-legacy-038954" exists ...
	I1226 21:55:20.877690   53185 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-038954 --format={{.State.Status}}
	I1226 21:55:20.877655   53185 kapi.go:59] client config for ingress-addon-legacy-038954: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.key", CAFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 21:55:20.877877   53185 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-038954 --format={{.State.Status}}
	I1226 21:55:20.878361   53185 cert_rotation.go:137] Starting client certificate rotation controller
	I1226 21:55:20.905743   53185 kapi.go:59] client config for ingress-addon-legacy-038954: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.key", CAFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 21:55:20.906093   53185 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-038954"
	I1226 21:55:20.906134   53185 host.go:66] Checking if "ingress-addon-legacy-038954" exists ...
	I1226 21:55:20.906673   53185 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-038954 --format={{.State.Status}}
	I1226 21:55:20.908976   53185 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 21:55:20.910901   53185 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 21:55:20.910920   53185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1226 21:55:20.910988   53185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-038954
	I1226 21:55:20.926244   53185 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1226 21:55:20.926265   53185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1226 21:55:20.926364   53185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-038954
	I1226 21:55:20.937310   53185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/ingress-addon-legacy-038954/id_rsa Username:docker}
	I1226 21:55:20.943029   53185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/ingress-addon-legacy-038954/id_rsa Username:docker}
	I1226 21:55:21.058852   53185 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1226 21:55:21.075954   53185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 21:55:21.076994   53185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1226 21:55:21.381943   53185 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-038954" context rescaled to 1 replicas
	I1226 21:55:21.381989   53185 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 21:55:21.384091   53185 out.go:177] * Verifying Kubernetes components...
	I1226 21:55:21.385944   53185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 21:55:21.575752   53185 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
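
The sed pipeline a few lines up splices a hosts block into the CoreDNS Corefile ahead of the forward plugin, which is what the "host record injected" message confirms. The inserted stanza, reproduced here as a Go constant for reference; fallthrough hands names that don't match on to the remaining plugins:

    package sketch

    // hostsStanza is the block the sed pipeline above splices into the
    // Corefile so host.minikube.internal resolves from inside the cluster.
    const hostsStanza = `        hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }`
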
	I1226 21:55:21.686692   53185 kapi.go:59] client config for ingress-addon-legacy-038954: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.key", CAFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 21:55:21.687055   53185 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-038954" to be "Ready" ...
	I1226 21:55:21.692400   53185 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1226 21:55:21.693897   53185 addons.go:508] enable addons completed in 816.733484ms: enabled=[storage-provisioner default-storageclass]
	I1226 21:55:23.690492   53185 node_ready.go:58] node "ingress-addon-legacy-038954" has status "Ready":"False"
	I1226 21:55:25.690844   53185 node_ready.go:58] node "ingress-addon-legacy-038954" has status "Ready":"False"
	I1226 21:55:27.690882   53185 node_ready.go:58] node "ingress-addon-legacy-038954" has status "Ready":"False"
	I1226 21:55:30.190111   53185 node_ready.go:58] node "ingress-addon-legacy-038954" has status "Ready":"False"
	I1226 21:55:32.190298   53185 node_ready.go:58] node "ingress-addon-legacy-038954" has status "Ready":"False"
	I1226 21:55:34.190588   53185 node_ready.go:58] node "ingress-addon-legacy-038954" has status "Ready":"False"
	I1226 21:55:35.190462   53185 node_ready.go:49] node "ingress-addon-legacy-038954" has status "Ready":"True"
	I1226 21:55:35.190493   53185 node_ready.go:38] duration metric: took 13.503401645s waiting for node "ingress-addon-legacy-038954" to be "Ready" ...
	I1226 21:55:35.190503   53185 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 21:55:35.196562   53185 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-cbldn" in "kube-system" namespace to be "Ready" ...
	I1226 21:55:37.200189   53185 pod_ready.go:102] pod "coredns-66bff467f8-cbldn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-26 21:55:20 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1226 21:55:39.699584   53185 pod_ready.go:102] pod "coredns-66bff467f8-cbldn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-26 21:55:20 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1226 21:55:41.702117   53185 pod_ready.go:102] pod "coredns-66bff467f8-cbldn" in "kube-system" namespace has status "Ready":"False"
	I1226 21:55:43.701583   53185 pod_ready.go:92] pod "coredns-66bff467f8-cbldn" in "kube-system" namespace has status "Ready":"True"
	I1226 21:55:43.701609   53185 pod_ready.go:81] duration metric: took 8.505026399s waiting for pod "coredns-66bff467f8-cbldn" in "kube-system" namespace to be "Ready" ...
	I1226 21:55:43.701618   53185 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-038954" in "kube-system" namespace to be "Ready" ...
	I1226 21:55:43.705495   53185 pod_ready.go:92] pod "etcd-ingress-addon-legacy-038954" in "kube-system" namespace has status "Ready":"True"
	I1226 21:55:43.705514   53185 pod_ready.go:81] duration metric: took 3.890483ms waiting for pod "etcd-ingress-addon-legacy-038954" in "kube-system" namespace to be "Ready" ...
	I1226 21:55:43.705524   53185 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-038954" in "kube-system" namespace to be "Ready" ...
	I1226 21:55:43.709565   53185 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-038954" in "kube-system" namespace has status "Ready":"True"
	I1226 21:55:43.709587   53185 pod_ready.go:81] duration metric: took 4.056393ms waiting for pod "kube-apiserver-ingress-addon-legacy-038954" in "kube-system" namespace to be "Ready" ...
	I1226 21:55:43.709596   53185 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-038954" in "kube-system" namespace to be "Ready" ...
	I1226 21:55:43.713513   53185 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-038954" in "kube-system" namespace has status "Ready":"True"
	I1226 21:55:43.713531   53185 pod_ready.go:81] duration metric: took 3.928047ms waiting for pod "kube-controller-manager-ingress-addon-legacy-038954" in "kube-system" namespace to be "Ready" ...
	I1226 21:55:43.713539   53185 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sh5sp" in "kube-system" namespace to be "Ready" ...
	I1226 21:55:43.717470   53185 pod_ready.go:92] pod "kube-proxy-sh5sp" in "kube-system" namespace has status "Ready":"True"
	I1226 21:55:43.717492   53185 pod_ready.go:81] duration metric: took 3.946956ms waiting for pod "kube-proxy-sh5sp" in "kube-system" namespace to be "Ready" ...
	I1226 21:55:43.717499   53185 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-038954" in "kube-system" namespace to be "Ready" ...
	I1226 21:55:43.897897   53185 request.go:629] Waited for 180.337348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-038954
	I1226 21:55:44.097501   53185 request.go:629] Waited for 197.078926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-038954
	I1226 21:55:44.100088   53185 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-038954" in "kube-system" namespace has status "Ready":"True"
	I1226 21:55:44.100113   53185 pod_ready.go:81] duration metric: took 382.606169ms waiting for pod "kube-scheduler-ingress-addon-legacy-038954" in "kube-system" namespace to be "Ready" ...
	I1226 21:55:44.100128   53185 pod_ready.go:38] duration metric: took 8.909614335s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 21:55:44.100146   53185 api_server.go:52] waiting for apiserver process to appear ...
	I1226 21:55:44.100216   53185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 21:55:44.110184   53185 api_server.go:72] duration metric: took 22.728166372s to wait for apiserver process to appear ...
	I1226 21:55:44.110203   53185 api_server.go:88] waiting for apiserver healthz status ...
	I1226 21:55:44.110219   53185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1226 21:55:44.114736   53185 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1226 21:55:44.115509   53185 api_server.go:141] control plane version: v1.18.20
	I1226 21:55:44.115529   53185 api_server.go:131] duration metric: took 5.320967ms to wait for apiserver health ...
	I1226 21:55:44.115537   53185 system_pods.go:43] waiting for kube-system pods to appear ...
	I1226 21:55:44.297856   53185 request.go:629] Waited for 182.246618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1226 21:55:44.303353   53185 system_pods.go:59] 8 kube-system pods found
	I1226 21:55:44.303387   53185 system_pods.go:61] "coredns-66bff467f8-cbldn" [10ef3598-1e0b-4b17-973d-ecf4dfc2343c] Running
	I1226 21:55:44.303395   53185 system_pods.go:61] "etcd-ingress-addon-legacy-038954" [67173adb-2a6c-4928-a47a-81b98233ce56] Running
	I1226 21:55:44.303406   53185 system_pods.go:61] "kindnet-ttq8s" [7c0a571e-c674-47a7-8d19-25b8d9db6d3e] Running
	I1226 21:55:44.303413   53185 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-038954" [46575aed-04ca-46ab-872c-a5e947bab714] Running
	I1226 21:55:44.303420   53185 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-038954" [c0152bcf-0fab-4436-973b-50b15e79f6bf] Running
	I1226 21:55:44.303427   53185 system_pods.go:61] "kube-proxy-sh5sp" [c5a4c04e-720b-45bb-9792-cd94b4a4f6c4] Running
	I1226 21:55:44.303437   53185 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-038954" [b1f9c2d3-8ba6-42fe-9349-e0a7d3272a94] Running
	I1226 21:55:44.303442   53185 system_pods.go:61] "storage-provisioner" [fa19d669-201f-426c-905b-e19fae064649] Running
	I1226 21:55:44.303457   53185 system_pods.go:74] duration metric: took 187.914906ms to wait for pod list to return data ...
	I1226 21:55:44.303467   53185 default_sa.go:34] waiting for default service account to be created ...
	I1226 21:55:44.497783   53185 request.go:629] Waited for 194.234052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1226 21:55:44.499932   53185 default_sa.go:45] found service account: "default"
	I1226 21:55:44.499962   53185 default_sa.go:55] duration metric: took 196.485025ms for default service account to be created ...
	I1226 21:55:44.499973   53185 system_pods.go:116] waiting for k8s-apps to be running ...
	I1226 21:55:44.697413   53185 request.go:629] Waited for 197.353775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1226 21:55:44.703281   53185 system_pods.go:86] 8 kube-system pods found
	I1226 21:55:44.703311   53185 system_pods.go:89] "coredns-66bff467f8-cbldn" [10ef3598-1e0b-4b17-973d-ecf4dfc2343c] Running
	I1226 21:55:44.703319   53185 system_pods.go:89] "etcd-ingress-addon-legacy-038954" [67173adb-2a6c-4928-a47a-81b98233ce56] Running
	I1226 21:55:44.703326   53185 system_pods.go:89] "kindnet-ttq8s" [7c0a571e-c674-47a7-8d19-25b8d9db6d3e] Running
	I1226 21:55:44.703332   53185 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-038954" [46575aed-04ca-46ab-872c-a5e947bab714] Running
	I1226 21:55:44.703337   53185 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-038954" [c0152bcf-0fab-4436-973b-50b15e79f6bf] Running
	I1226 21:55:44.703343   53185 system_pods.go:89] "kube-proxy-sh5sp" [c5a4c04e-720b-45bb-9792-cd94b4a4f6c4] Running
	I1226 21:55:44.703349   53185 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-038954" [b1f9c2d3-8ba6-42fe-9349-e0a7d3272a94] Running
	I1226 21:55:44.703356   53185 system_pods.go:89] "storage-provisioner" [fa19d669-201f-426c-905b-e19fae064649] Running
	I1226 21:55:44.703369   53185 system_pods.go:126] duration metric: took 203.383834ms to wait for k8s-apps to be running ...
	I1226 21:55:44.703387   53185 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 21:55:44.703446   53185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 21:55:44.713676   53185 system_svc.go:56] duration metric: took 10.284599ms WaitForService to wait for kubelet.
	I1226 21:55:44.713697   53185 kubeadm.go:581] duration metric: took 23.331681578s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 21:55:44.713719   53185 node_conditions.go:102] verifying NodePressure condition ...
	I1226 21:55:44.897044   53185 request.go:629] Waited for 183.254558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1226 21:55:44.899412   53185 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1226 21:55:44.899437   53185 node_conditions.go:123] node cpu capacity is 8
	I1226 21:55:44.899447   53185 node_conditions.go:105] duration metric: took 185.723764ms to run NodePressure ...
	I1226 21:55:44.899457   53185 start.go:228] waiting for startup goroutines ...
	I1226 21:55:44.899463   53185 start.go:233] waiting for cluster config update ...
	I1226 21:55:44.899482   53185 start.go:242] writing updated cluster config ...
	I1226 21:55:44.899706   53185 ssh_runner.go:195] Run: rm -f paused
	I1226 21:55:44.943891   53185 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I1226 21:55:44.945798   53185 out.go:177] 
	W1226 21:55:44.947115   53185 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I1226 21:55:44.948504   53185 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1226 21:55:44.949909   53185 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-038954" cluster and "default" namespace by default
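
Note on the recurring request.go:629 "Waited for ... due to client-side throttling" lines in this run: they come from client-go's token-bucket rate limiter, not the apiserver. The rest.Config captured earlier in this log leaves QPS and Burst at 0, so the client falls back to its defaults (QPS 5, Burst 10) and paces bursts of GETs during the readiness checks; each wait above is under 200ms. A minimal sketch of raising those limits follows, assuming a hypothetical kubeconfigPath and illustrative values (this is not what minikube itself configures):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical path; substitute the real kubeconfig location.
        kubeconfigPath := "/home/user/.kube/config"
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
        if err != nil {
            panic(err)
        }
        // Left at zero (as in the rest.Config dump above), client-go falls
        // back to QPS 5 / Burst 10; the request.go:629 waits are that
        // limiter pacing bursts of requests.
        cfg.QPS = 50
        cfg.Burst = 100
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("clientset ready: %T\n", clientset)
    }

Raising QPS/Burst only trades client-side waits for heavier apiserver load; at sub-200ms, the waits seen here are harmless.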
	
	
	==> CRI-O <==
	Dec 26 21:58:35 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:35.110482940Z" level=info msg="Started container" PID=4871 containerID=0896b2852c6ddbd22c63f3eba678e2d5b447629b55e177d4c71e698164da5e74 description=default/hello-world-app-5f5d8b66bb-7zq6v/hello-world-app id=7c22df28-1967-450d-b40c-bb1ab44dd317 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=5ac927fd00edd019d636bf567f9603136749eb5954cef697213679a87a3a7870
	Dec 26 21:58:46 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:46.887207521Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=8f4e26d5-019b-4547-b2e8-12bd6fca37e5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 21:58:50 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:50.887921933Z" level=info msg="Stopping pod sandbox: 9955f82f24721be0ddef345b3a9718899e2bc541334d6a2cbbe10e99447d497f" id=beb50992-898d-4871-9291-79f06f31146c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 26 21:58:50 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:50.888825622Z" level=info msg="Stopped pod sandbox: 9955f82f24721be0ddef345b3a9718899e2bc541334d6a2cbbe10e99447d497f" id=beb50992-898d-4871-9291-79f06f31146c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 26 21:58:51 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:51.314878271Z" level=info msg="Stopping pod sandbox: 9955f82f24721be0ddef345b3a9718899e2bc541334d6a2cbbe10e99447d497f" id=7ab14751-8e6f-4bf6-af9b-4f56a0f73723 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 26 21:58:51 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:51.314931874Z" level=info msg="Stopped pod sandbox (already stopped): 9955f82f24721be0ddef345b3a9718899e2bc541334d6a2cbbe10e99447d497f" id=7ab14751-8e6f-4bf6-af9b-4f56a0f73723 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 26 21:58:52 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:52.050283996Z" level=info msg="Stopping container: 6e623a18eafec18220f0065608757cec9512a73ac3315910bfa89a348ff1f1c7 (timeout: 2s)" id=9ace0144-2710-45c2-bc6d-55a8f2feb37b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 26 21:58:52 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:52.052406860Z" level=info msg="Stopping container: 6e623a18eafec18220f0065608757cec9512a73ac3315910bfa89a348ff1f1c7 (timeout: 2s)" id=1cf16db9-a325-4a3b-9c11-47bc885f0e9f name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 26 21:58:52 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:52.886774535Z" level=info msg="Stopping pod sandbox: 9955f82f24721be0ddef345b3a9718899e2bc541334d6a2cbbe10e99447d497f" id=8100e34c-92fe-45a1-90c4-9c2471b9d650 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 26 21:58:52 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:52.886836395Z" level=info msg="Stopped pod sandbox (already stopped): 9955f82f24721be0ddef345b3a9718899e2bc541334d6a2cbbe10e99447d497f" id=8100e34c-92fe-45a1-90c4-9c2471b9d650 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 26 21:58:54 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:54.058677049Z" level=warning msg="Stopping container 6e623a18eafec18220f0065608757cec9512a73ac3315910bfa89a348ff1f1c7 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=9ace0144-2710-45c2-bc6d-55a8f2feb37b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 26 21:58:54 ingress-addon-legacy-038954 conmon[3401]: conmon 6e623a18eafec18220f0 <ninfo>: container 3413 exited with status 137
	Dec 26 21:58:54 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:54.202799053Z" level=info msg="Stopped container 6e623a18eafec18220f0065608757cec9512a73ac3315910bfa89a348ff1f1c7: ingress-nginx/ingress-nginx-controller-7fcf777cb7-dzt62/controller" id=1cf16db9-a325-4a3b-9c11-47bc885f0e9f name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 26 21:58:54 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:54.203377975Z" level=info msg="Stopped container 6e623a18eafec18220f0065608757cec9512a73ac3315910bfa89a348ff1f1c7: ingress-nginx/ingress-nginx-controller-7fcf777cb7-dzt62/controller" id=9ace0144-2710-45c2-bc6d-55a8f2feb37b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 26 21:58:54 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:54.203414780Z" level=info msg="Stopping pod sandbox: 0175fe514ff16453584dd6180b42dd55d2797a28aff36a4e2fcd6bcdd65d3775" id=c3af13d3-6083-453b-acee-5e13c96a0714 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 26 21:58:54 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:54.203730796Z" level=info msg="Stopping pod sandbox: 0175fe514ff16453584dd6180b42dd55d2797a28aff36a4e2fcd6bcdd65d3775" id=a1bff629-af84-4d5d-9eaf-efca408ed88c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 26 21:58:54 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:54.206356061Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-Y4WPZGKJYK35QRTQ - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-GFAO4KBCZS3FZSUK - [0:0]\n-X KUBE-HP-Y4WPZGKJYK35QRTQ\n-X KUBE-HP-GFAO4KBCZS3FZSUK\nCOMMIT\n"
	Dec 26 21:58:54 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:54.207554587Z" level=info msg="Closing host port tcp:80"
	Dec 26 21:58:54 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:54.207590239Z" level=info msg="Closing host port tcp:443"
	Dec 26 21:58:54 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:54.208543657Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 26 21:58:54 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:54.208563354Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 26 21:58:54 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:54.208679804Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-dzt62 Namespace:ingress-nginx ID:0175fe514ff16453584dd6180b42dd55d2797a28aff36a4e2fcd6bcdd65d3775 UID:27c7b40d-3edb-41ca-8808-1048d52f604d NetNS:/var/run/netns/92ba5a39-7a48-490c-a5d2-5793d01c6508 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 26 21:58:54 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:54.208792378Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-dzt62 from CNI network \"kindnet\" (type=ptp)"
	Dec 26 21:58:54 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:54.247975486Z" level=info msg="Stopped pod sandbox: 0175fe514ff16453584dd6180b42dd55d2797a28aff36a4e2fcd6bcdd65d3775" id=c3af13d3-6083-453b-acee-5e13c96a0714 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 26 21:58:54 ingress-addon-legacy-038954 crio[955]: time="2023-12-26 21:58:54.248100737Z" level=info msg="Stopped pod sandbox (already stopped): 0175fe514ff16453584dd6180b42dd55d2797a28aff36a4e2fcd6bcdd65d3775" id=a1bff629-af84-4d5d-9eaf-efca408ed88c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0896b2852c6dd       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            24 seconds ago      Running             hello-world-app           0                   5ac927fd00edd       hello-world-app-5f5d8b66bb-7zq6v
	9a6149f90e15b       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                    2 minutes ago       Running             nginx                     0                   522af4d60f752       nginx
	6e623a18eafec       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   0175fe514ff16       ingress-nginx-controller-7fcf777cb7-dzt62
	8b69ee22dcb68       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   f6281370ad41a       ingress-nginx-admission-patch-b7zkz
	75d3254d6f3eb       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   ee2a9eb41529e       ingress-nginx-admission-create-x79x6
	8e3e2e2544fde       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   d4cab175bfebf       coredns-66bff467f8-cbldn
	da6ffcc214401       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   96df73c2b22c7       storage-provisioner
	ee6961c869779       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   7dc1002df1dcb       kindnet-ttq8s
	a0d1e09a6d4cc       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   6255e0e8d04a2       kube-proxy-sh5sp
	7eff45dd3a2fa       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   21864005dd769       etcd-ingress-addon-legacy-038954
	bcc611d7a7699       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   1f146168c58a3       kube-controller-manager-ingress-addon-legacy-038954
	df66e978b1848       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   99818eb94da4d       kube-apiserver-ingress-addon-legacy-038954
	f8e3c3cf4da2c       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   abb557f8249dc       kube-scheduler-ingress-addon-legacy-038954
	
	
	==> coredns [8e3e2e2544fde22879b58cbe2db5c8980d94dad956d30806275d466f3c5f3497] <==
	[INFO] 10.244.0.5:45980 - 56154 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004128826s
	[INFO] 10.244.0.5:45980 - 58843 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004229197s
	[INFO] 10.244.0.5:36895 - 8471 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004706987s
	[INFO] 10.244.0.5:42255 - 17923 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00450928s
	[INFO] 10.244.0.5:52820 - 43779 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004696465s
	[INFO] 10.244.0.5:58958 - 1784 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004947082s
	[INFO] 10.244.0.5:57470 - 25221 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004549347s
	[INFO] 10.244.0.5:52755 - 39942 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00485889s
	[INFO] 10.244.0.5:47106 - 42287 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004890003s
	[INFO] 10.244.0.5:36895 - 61082 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005024363s
	[INFO] 10.244.0.5:47106 - 30359 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004826416s
	[INFO] 10.244.0.5:52755 - 24103 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004858896s
	[INFO] 10.244.0.5:52820 - 8574 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005073548s
	[INFO] 10.244.0.5:57470 - 52104 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005016213s
	[INFO] 10.244.0.5:58958 - 59102 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005086334s
	[INFO] 10.244.0.5:36895 - 64506 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000127688s
	[INFO] 10.244.0.5:57470 - 49739 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006348s
	[INFO] 10.244.0.5:42255 - 61289 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005275273s
	[INFO] 10.244.0.5:47106 - 58682 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000147124s
	[INFO] 10.244.0.5:45980 - 34470 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00550783s
	[INFO] 10.244.0.5:52820 - 9816 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00017518s
	[INFO] 10.244.0.5:52755 - 36089 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000223506s
	[INFO] 10.244.0.5:58958 - 50081 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000180223s
	[INFO] 10.244.0.5:42255 - 33867 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055103s
	[INFO] 10.244.0.5:45980 - 14907 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078688s
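
The burst of NXDOMAIN answers before each final NOERROR is the pod's DNS search-path expansion at work: kubelet writes the pod's resolv.conf with options ndots:5, so a name with fewer than five dots, such as hello-world-app.default.svc.cluster.local, is first tried with each search suffix appended, including the host-inherited c.k8s-minikube.internal and google.internal domains visible in the queries above, before being resolved as an absolute name. An illustrative pod resolv.conf consistent with these queries (the nameserver address is the conventional kube-dns ClusterIP, assumed here rather than read from this cluster):

    search default.svc.cluster.local svc.cluster.local cluster.local c.k8s-minikube.internal google.internal
    nameserver 10.96.0.10
    options ndots:5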
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-038954
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-038954
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=ingress-addon-legacy-038954
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_26T21_55_05_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 21:55:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-038954
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 21:58:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Dec 2023 21:56:35 +0000   Tue, 26 Dec 2023 21:54:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Dec 2023 21:56:35 +0000   Tue, 26 Dec 2023 21:54:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Dec 2023 21:56:35 +0000   Tue, 26 Dec 2023 21:54:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Dec 2023 21:56:35 +0000   Tue, 26 Dec 2023 21:55:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-038954
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb20b668ffb24c2ab235a78d09c7194c
	  System UUID:                70808deb-f87a-40bd-a75a-bdab875b32c8
	  Boot ID:                    86db03b9-ef11-43ea-be40-040b33a40e54
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-7zq6v                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 coredns-66bff467f8-cbldn                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m39s
	  kube-system                 etcd-ingress-addon-legacy-038954                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kindnet-ttq8s                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m39s
	  kube-system                 kube-apiserver-ingress-addon-legacy-038954              250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-038954     200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-proxy-sh5sp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-scheduler-ingress-addon-legacy-038954              100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 4m2s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m2s (x5 over 4m2s)  kubelet     Node ingress-addon-legacy-038954 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x4 over 4m2s)  kubelet     Node ingress-addon-legacy-038954 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x4 over 4m2s)  kubelet     Node ingress-addon-legacy-038954 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m55s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m55s                kubelet     Node ingress-addon-legacy-038954 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m55s                kubelet     Node ingress-addon-legacy-038954 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m55s                kubelet     Node ingress-addon-legacy-038954 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m38s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m25s                kubelet     Node ingress-addon-legacy-038954 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.004919] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006671] FS-Cache: N-cookie d=00000000ddec32f3{9p.inode} n=00000000627f747c
	[  +0.008752] FS-Cache: N-key=[8] '91a00f0200000000'
	[  +0.253422] FS-Cache: Duplicate cookie detected
	[  +0.004671] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006764] FS-Cache: O-cookie d=00000000ddec32f3{9p.inode} n=000000008d706fc0
	[  +0.007354] FS-Cache: O-key=[8] '99a00f0200000000'
	[  +0.004955] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007949] FS-Cache: N-cookie d=00000000ddec32f3{9p.inode} n=00000000b5f489b5
	[  +0.007343] FS-Cache: N-key=[8] '99a00f0200000000'
	[  +4.728524] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec26 21:56] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 43 c5 69 e0 1a 5a 30 9b d2 92 97 08 00
	[  +1.016201] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 43 c5 69 e0 1a 5a 30 9b d2 92 97 08 00
	[  +2.019787] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 43 c5 69 e0 1a 5a 30 9b d2 92 97 08 00
	[  +4.155618] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 43 c5 69 e0 1a 5a 30 9b d2 92 97 08 00
	[  +8.191113] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 43 c5 69 e0 1a 5a 30 9b d2 92 97 08 00
	[ +16.130329] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 43 c5 69 e0 1a 5a 30 9b d2 92 97 08 00
	[Dec26 21:57] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 43 c5 69 e0 1a 5a 30 9b d2 92 97 08 00
	
	
	==> etcd [7eff45dd3a2faf0c6051c033c63ceb4bf602894812c6f3e80132603112666b7a] <==
	raft2023/12/26 21:54:58 INFO: aec36adc501070cc became follower at term 0
	raft2023/12/26 21:54:58 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/26 21:54:58 INFO: aec36adc501070cc became follower at term 1
	raft2023/12/26 21:54:58 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-26 21:54:58.358515 W | auth: simple token is not cryptographically signed
	2023-12-26 21:54:58.369268 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-26 21:54:58.370213 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/12/26 21:54:58 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-26 21:54:58.371085 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-12-26 21:54:58.372144 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-26 21:54:58.372336 I | embed: listening for peers on 192.168.49.2:2380
	2023-12-26 21:54:58.372423 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/12/26 21:54:59 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/26 21:54:59 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/26 21:54:59 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/26 21:54:59 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/26 21:54:59 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-26 21:54:59.290321 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-26 21:54:59.291600 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-26 21:54:59.291661 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-26 21:54:59.291722 I | etcdserver: published {Name:ingress-addon-legacy-038954 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-26 21:54:59.291754 I | embed: ready to serve client requests
	2023-12-26 21:54:59.291772 I | embed: ready to serve client requests
	2023-12-26 21:54:59.295210 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-26 21:54:59.295406 I | embed: serving client requests on 192.168.49.2:2379
	
	
	==> kernel <==
	 21:58:59 up 41 min,  0 users,  load average: 0.14, 0.51, 0.46
	Linux ingress-addon-legacy-038954 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [ee6961c86977936e1e5bdca87271c5b2b3333d2a6ab3ff28e1aa36178181a9ce] <==
	I1226 21:56:54.519470       1 main.go:227] handling current node
	I1226 21:57:04.531577       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:57:04.531601       1 main.go:227] handling current node
	I1226 21:57:14.534872       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:57:14.534895       1 main.go:227] handling current node
	I1226 21:57:24.546881       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:57:24.546904       1 main.go:227] handling current node
	I1226 21:57:34.550949       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:57:34.550974       1 main.go:227] handling current node
	I1226 21:57:44.562096       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:57:44.562128       1 main.go:227] handling current node
	I1226 21:57:54.565778       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:57:54.565802       1 main.go:227] handling current node
	I1226 21:58:04.578384       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:58:04.578408       1 main.go:227] handling current node
	I1226 21:58:14.581698       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:58:14.581723       1 main.go:227] handling current node
	I1226 21:58:24.586282       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:58:24.586309       1 main.go:227] handling current node
	I1226 21:58:34.589987       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:58:34.590014       1 main.go:227] handling current node
	I1226 21:58:44.593026       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:58:44.593056       1 main.go:227] handling current node
	I1226 21:58:54.598606       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:58:54.598632       1 main.go:227] handling current node
	
	
	==> kube-apiserver [df66e978b18483c4b100d437b904ee43465c4b4ab9bb669ada89f608cb1de99b] <==
	E1226 21:55:01.843072       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1226 21:55:01.954841       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1226 21:55:01.954918       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1226 21:55:01.954987       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1226 21:55:01.954844       1 cache.go:39] Caches are synced for autoregister controller
	I1226 21:55:01.954848       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1226 21:55:02.839892       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1226 21:55:02.839921       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1226 21:55:02.845005       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1226 21:55:02.847568       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1226 21:55:02.847586       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1226 21:55:03.107077       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1226 21:55:03.133528       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1226 21:55:03.200739       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1226 21:55:03.201806       1 controller.go:609] quota admission added evaluator for: endpoints
	I1226 21:55:03.204816       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1226 21:55:03.565425       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1226 21:55:04.119076       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1226 21:55:04.523660       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1226 21:55:04.704443       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1226 21:55:20.621815       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1226 21:55:20.695375       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1226 21:55:45.613265       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1226 21:56:11.656654       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1226 21:58:52.059178       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	
	==> kube-controller-manager [bcc611d7a76993e62201989c1f53c3de30f7315f0a020f63355ba770f354e255] <==
	E1226 21:55:20.756984       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1226 21:55:20.808287       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
	I1226 21:55:20.881004       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
	I1226 21:55:20.896838       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"ad09d473-fc39-4423-b91e-a2dc8d7d7c14", APIVersion:"apps/v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1226 21:55:20.947839       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I1226 21:55:20.955595       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"b65ef15c-4513-4633-a4b2-cff0825ba08a", APIVersion:"apps/v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-7gjxq
	I1226 21:55:21.098047       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1226 21:55:21.157840       1 shared_informer.go:230] Caches are synced for disruption 
	I1226 21:55:21.157992       1 disruption.go:339] Sending events to api server.
	I1226 21:55:21.157964       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I1226 21:55:21.300332       1 shared_informer.go:230] Caches are synced for resource quota 
	I1226 21:55:21.354826       1 shared_informer.go:230] Caches are synced for resource quota 
	I1226 21:55:21.354831       1 shared_informer.go:230] Caches are synced for endpoint 
	I1226 21:55:21.354863       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1226 21:55:21.354886       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1226 21:55:21.354830       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1226 21:55:35.698375       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1226 21:55:45.603887       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"9fabec75-c4d1-4c57-af8b-28231067fd28", APIVersion:"apps/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1226 21:55:45.611002       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"a29b026f-6b27-49ae-b333-d232f9e1ea2f", APIVersion:"apps/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-dzt62
	I1226 21:55:45.660051       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"a769005b-f679-4bb0-b96d-06a80aa83b4b", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-x79x6
	I1226 21:55:45.672717       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"a533b545-8afe-4310-8789-081ed91939e6", APIVersion:"batch/v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-b7zkz
	I1226 21:55:49.026685       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"a533b545-8afe-4310-8789-081ed91939e6", APIVersion:"batch/v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1226 21:55:49.032939       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"a769005b-f679-4bb0-b96d-06a80aa83b4b", APIVersion:"batch/v1", ResourceVersion:"501", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1226 21:58:33.356196       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"a88054c8-f0b2-4b63-b828-90a11726073f", APIVersion:"apps/v1", ResourceVersion:"723", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1226 21:58:33.361308       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"22db39d9-e21e-45b1-8ef5-0423f083dd49", APIVersion:"apps/v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-7zq6v
	
	
	==> kube-proxy [a0d1e09a6d4ccbb356a760b07d196c4830250009c79120ccca0cc04136314df5] <==
	W1226 21:55:21.588947       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1226 21:55:21.659812       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1226 21:55:21.659839       1 server_others.go:186] Using iptables Proxier.
	I1226 21:55:21.660148       1 server.go:583] Version: v1.18.20
	I1226 21:55:21.660705       1 config.go:133] Starting endpoints config controller
	I1226 21:55:21.660780       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1226 21:55:21.660919       1 config.go:315] Starting service config controller
	I1226 21:55:21.660932       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1226 21:55:21.760974       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1226 21:55:21.761061       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [f8e3c3cf4da2c73daf51e6760b76a96056fba2879942fa944fcf1d5439f0dbeb] <==
	W1226 21:55:01.856568       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1226 21:55:01.856576       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1226 21:55:01.958617       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1226 21:55:01.958640       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1226 21:55:01.960982       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1226 21:55:01.961124       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1226 21:55:01.962302       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1226 21:55:01.963760       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1226 21:55:01.968488       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1226 21:55:01.968512       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1226 21:55:01.968609       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1226 21:55:01.968828       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1226 21:55:01.969099       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1226 21:55:01.969183       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1226 21:55:01.969279       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1226 21:55:01.970810       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1226 21:55:01.970813       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1226 21:55:01.970928       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1226 21:55:01.970957       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1226 21:55:01.972186       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1226 21:55:02.863879       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1226 21:55:02.955768       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1226 21:55:03.004526       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1226 21:55:03.020510       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1226 21:55:05.562610       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Dec 26 21:58:20 ingress-addon-legacy-038954 kubelet[1851]: E1226 21:58:20.887531    1851 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 21:58:20 ingress-addon-legacy-038954 kubelet[1851]: E1226 21:58:20.887561    1851 pod_workers.go:191] Error syncing pod 270d185a-585a-4588-85cb-151b2b6a4245 ("kube-ingress-dns-minikube_kube-system(270d185a-585a-4588-85cb-151b2b6a4245)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 26 21:58:33 ingress-addon-legacy-038954 kubelet[1851]: I1226 21:58:33.366000    1851 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 26 21:58:33 ingress-addon-legacy-038954 kubelet[1851]: I1226 21:58:33.475059    1851 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-txhdp" (UniqueName: "kubernetes.io/secret/7a5c36e7-fb5c-4d7d-b5a8-c17dd4e5ef51-default-token-txhdp") pod "hello-world-app-5f5d8b66bb-7zq6v" (UID: "7a5c36e7-fb5c-4d7d-b5a8-c17dd4e5ef51")
	Dec 26 21:58:33 ingress-addon-legacy-038954 kubelet[1851]: W1226 21:58:33.731570    1851 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/ae6dabd913b308b6416556feed33945e13295ceedb438d6bb86d813d3ed10a63/crio-5ac927fd00edd019d636bf567f9603136749eb5954cef697213679a87a3a7870 WatchSource:0}: Error finding container 5ac927fd00edd019d636bf567f9603136749eb5954cef697213679a87a3a7870: Status 404 returned error &{%!s(*http.body=&{0xc000a17aa0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x750800) %!s(func() error=0x750790)}
	Dec 26 21:58:33 ingress-addon-legacy-038954 kubelet[1851]: E1226 21:58:33.887317    1851 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 21:58:33 ingress-addon-legacy-038954 kubelet[1851]: E1226 21:58:33.887360    1851 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 21:58:33 ingress-addon-legacy-038954 kubelet[1851]: E1226 21:58:33.887407    1851 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 21:58:33 ingress-addon-legacy-038954 kubelet[1851]: E1226 21:58:33.887435    1851 pod_workers.go:191] Error syncing pod 270d185a-585a-4588-85cb-151b2b6a4245 ("kube-ingress-dns-minikube_kube-system(270d185a-585a-4588-85cb-151b2b6a4245)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 26 21:58:46 ingress-addon-legacy-038954 kubelet[1851]: E1226 21:58:46.887558    1851 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 21:58:46 ingress-addon-legacy-038954 kubelet[1851]: E1226 21:58:46.887599    1851 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 21:58:46 ingress-addon-legacy-038954 kubelet[1851]: E1226 21:58:46.887647    1851 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 21:58:46 ingress-addon-legacy-038954 kubelet[1851]: E1226 21:58:46.887672    1851 pod_workers.go:191] Error syncing pod 270d185a-585a-4588-85cb-151b2b6a4245 ("kube-ingress-dns-minikube_kube-system(270d185a-585a-4588-85cb-151b2b6a4245)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 26 21:58:49 ingress-addon-legacy-038954 kubelet[1851]: I1226 21:58:49.108966    1851 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-dg5qn" (UniqueName: "kubernetes.io/secret/270d185a-585a-4588-85cb-151b2b6a4245-minikube-ingress-dns-token-dg5qn") pod "270d185a-585a-4588-85cb-151b2b6a4245" (UID: "270d185a-585a-4588-85cb-151b2b6a4245")
	Dec 26 21:58:49 ingress-addon-legacy-038954 kubelet[1851]: I1226 21:58:49.110925    1851 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270d185a-585a-4588-85cb-151b2b6a4245-minikube-ingress-dns-token-dg5qn" (OuterVolumeSpecName: "minikube-ingress-dns-token-dg5qn") pod "270d185a-585a-4588-85cb-151b2b6a4245" (UID: "270d185a-585a-4588-85cb-151b2b6a4245"). InnerVolumeSpecName "minikube-ingress-dns-token-dg5qn". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 26 21:58:49 ingress-addon-legacy-038954 kubelet[1851]: I1226 21:58:49.209310    1851 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-dg5qn" (UniqueName: "kubernetes.io/secret/270d185a-585a-4588-85cb-151b2b6a4245-minikube-ingress-dns-token-dg5qn") on node "ingress-addon-legacy-038954" DevicePath ""
	Dec 26 21:58:52 ingress-addon-legacy-038954 kubelet[1851]: E1226 21:58:52.051287    1851 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-dzt62.17a48093f7444c36", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-dzt62", UID:"27c7b40d-3edb-41ca-8808-1048d52f604d", APIVersion:"v1", ResourceVersion:"487", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-038954"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15af10702f93436, ext:227557043343, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15af10702f93436, ext:227557043343, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-dzt62.17a48093f7444c36" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 26 21:58:52 ingress-addon-legacy-038954 kubelet[1851]: E1226 21:58:52.054819    1851 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-dzt62.17a48093f7444c36", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-dzt62", UID:"27c7b40d-3edb-41ca-8808-1048d52f604d", APIVersion:"v1", ResourceVersion:"487", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-038954"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15af10702f93436, ext:227557043343, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15af107031bd579, ext:227559312858, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-dzt62.17a48093f7444c36" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 26 21:58:54 ingress-addon-legacy-038954 kubelet[1851]: W1226 21:58:54.310752    1851 pod_container_deletor.go:77] Container "0175fe514ff16453584dd6180b42dd55d2797a28aff36a4e2fcd6bcdd65d3775" not found in pod's containers
	Dec 26 21:58:56 ingress-addon-legacy-038954 kubelet[1851]: I1226 21:58:56.164199    1851 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-zdwtw" (UniqueName: "kubernetes.io/secret/27c7b40d-3edb-41ca-8808-1048d52f604d-ingress-nginx-token-zdwtw") pod "27c7b40d-3edb-41ca-8808-1048d52f604d" (UID: "27c7b40d-3edb-41ca-8808-1048d52f604d")
	Dec 26 21:58:56 ingress-addon-legacy-038954 kubelet[1851]: I1226 21:58:56.164246    1851 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/27c7b40d-3edb-41ca-8808-1048d52f604d-webhook-cert") pod "27c7b40d-3edb-41ca-8808-1048d52f604d" (UID: "27c7b40d-3edb-41ca-8808-1048d52f604d")
	Dec 26 21:58:56 ingress-addon-legacy-038954 kubelet[1851]: I1226 21:58:56.166024    1851 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27c7b40d-3edb-41ca-8808-1048d52f604d-ingress-nginx-token-zdwtw" (OuterVolumeSpecName: "ingress-nginx-token-zdwtw") pod "27c7b40d-3edb-41ca-8808-1048d52f604d" (UID: "27c7b40d-3edb-41ca-8808-1048d52f604d"). InnerVolumeSpecName "ingress-nginx-token-zdwtw". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 26 21:58:56 ingress-addon-legacy-038954 kubelet[1851]: I1226 21:58:56.166197    1851 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27c7b40d-3edb-41ca-8808-1048d52f604d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "27c7b40d-3edb-41ca-8808-1048d52f604d" (UID: "27c7b40d-3edb-41ca-8808-1048d52f604d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 26 21:58:56 ingress-addon-legacy-038954 kubelet[1851]: I1226 21:58:56.264522    1851 reconciler.go:319] Volume detached for volume "ingress-nginx-token-zdwtw" (UniqueName: "kubernetes.io/secret/27c7b40d-3edb-41ca-8808-1048d52f604d-ingress-nginx-token-zdwtw") on node "ingress-addon-legacy-038954" DevicePath ""
	Dec 26 21:58:56 ingress-addon-legacy-038954 kubelet[1851]: I1226 21:58:56.264564    1851 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/27c7b40d-3edb-41ca-8808-1048d52f604d-webhook-cert") on node "ingress-addon-legacy-038954" DevicePath ""
	
	
	==> storage-provisioner [da6ffcc2144010ded9d073eb55129029918ad6ad99790f8228440ecf1ca19b8f] <==
	I1226 21:55:40.316651       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1226 21:55:40.356336       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1226 21:55:40.356387       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1226 21:55:40.363195       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1226 21:55:40.363343       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-038954_627b1185-2b36-4901-8879-acdd3005b647!
	I1226 21:55:40.364340       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f3f5d96f-4fa6-48f2-926d-a2ae003f13c7", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-038954_627b1185-2b36-4901-8879-acdd3005b647 became leader
	I1226 21:55:40.464480       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-038954_627b1185-2b36-4901-8879-acdd3005b647!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-038954 -n ingress-addon-legacy-038954
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-038954 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (184.72s)
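Note: every ImageInspectError in the kubelet log above traces to one root cause: the addon references the image by the short name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:...", and CRI-O's registries.conf on this node defines no unqualified-search registries to resolve it against. A minimal sketch of the usual fix, assuming docker.io is the intended registry (an assumption, not stated anywhere in this report):

	# /etc/containers/registries.conf (assumed fix, not applied in this run):
	# allow bare image names to resolve against Docker Hub
	unqualified-search-registries = ["docker.io"]

The other common route is to fully qualify the reference in the addon manifest (docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:...) so no search list is consulted at all.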

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266826 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266826 -- exec busybox-5bc68d56bd-25lpb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266826 -- exec busybox-5bc68d56bd-25lpb -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-266826 -- exec busybox-5bc68d56bd-25lpb -- sh -c "ping -c 1 192.168.58.1": exit status 1 (179.435974ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-25lpb): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266826 -- exec busybox-5bc68d56bd-8vrwf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266826 -- exec busybox-5bc68d56bd-8vrwf -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-266826 -- exec busybox-5bc68d56bd-8vrwf -- sh -c "ping -c 1 192.168.58.1": exit status 1 (174.972977ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-8vrwf): exit status 1
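Note: "ping: permission denied (are you root?)" comes from the busybox ping applet being refused a raw ICMP socket; the run gets as far as printing the PING header before the raw-socket operation fails. CRI-O's default capability set omits NET_RAW, unlike some other runtimes, which is why the exec'd ping cannot proceed even in an otherwise healthy pod. A minimal sketch of a workaround, assuming the pod spec may be edited (hypothetical fragment, not the manifest this suite deploys):

	# busybox container spec fragment: grant CAP_NET_RAW back (assumed fix)
	securityContext:
	  capabilities:
	    add: ["NET_RAW"]

Enabling unprivileged ICMP node-wide via the net.ipv4.ping_group_range sysctl is the alternative, though busybox's ping applet may still insist on a raw socket.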
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-266826
helpers_test.go:235: (dbg) docker inspect multinode-266826:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed230b7e6557c20867e253d5c717cd5262c1c659284976821cc8dcb35ff714d1",
	        "Created": "2023-12-26T22:04:02.58685644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 99727,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-26T22:04:02.8560176Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/ed230b7e6557c20867e253d5c717cd5262c1c659284976821cc8dcb35ff714d1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed230b7e6557c20867e253d5c717cd5262c1c659284976821cc8dcb35ff714d1/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed230b7e6557c20867e253d5c717cd5262c1c659284976821cc8dcb35ff714d1/hosts",
	        "LogPath": "/var/lib/docker/containers/ed230b7e6557c20867e253d5c717cd5262c1c659284976821cc8dcb35ff714d1/ed230b7e6557c20867e253d5c717cd5262c1c659284976821cc8dcb35ff714d1-json.log",
	        "Name": "/multinode-266826",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-266826:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-266826",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/495bb0b3bd0e0e2111b4edc67532e5e22549ec1251e16133c2e0cdfa8d74db47-init/diff:/var/lib/docker/overlay2/9309fabaee2d1c218955e7e97c12621fc2771807097b157c41ecafdb1f7c4f26/diff",
	                "MergedDir": "/var/lib/docker/overlay2/495bb0b3bd0e0e2111b4edc67532e5e22549ec1251e16133c2e0cdfa8d74db47/merged",
	                "UpperDir": "/var/lib/docker/overlay2/495bb0b3bd0e0e2111b4edc67532e5e22549ec1251e16133c2e0cdfa8d74db47/diff",
	                "WorkDir": "/var/lib/docker/overlay2/495bb0b3bd0e0e2111b4edc67532e5e22549ec1251e16133c2e0cdfa8d74db47/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-266826",
	                "Source": "/var/lib/docker/volumes/multinode-266826/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-266826",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-266826",
	                "name.minikube.sigs.k8s.io": "multinode-266826",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1743d54be77603d040a5e1c64c70da86b1554836afb557bf8823f1b02af5dc25",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1743d54be776",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-266826": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ed230b7e6557",
	                        "multinode-266826"
	                    ],
	                    "NetworkID": "5cf8def1dc58d91394e837eb62028c42fc5b8a235ef8aa3c28f18a81dbf8a80f",
	                    "EndpointID": "2e69c93e48195d933cc02426e49e873a0806cb552b4b7df9c2f4a9627bb88750",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
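Note: in the network settings above, "Gateway": "192.168.58.1" is the host side of the multinode-266826 bridge, i.e. exactly the address both failed pings targeted. The same value can be read back with the Go templating the log already uses for docker network inspect (standard docker CLI, no assumptions beyond the network name):

	docker network inspect multinode-266826 --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'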
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-266826 -n multinode-266826
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-266826 logs -n 25: (1.152917124s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-918870                           | mount-start-2-918870 | jenkins | v1.32.0 | 26 Dec 23 22:03 UTC | 26 Dec 23 22:03 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-918870 ssh -- ls                    | mount-start-2-918870 | jenkins | v1.32.0 | 26 Dec 23 22:03 UTC | 26 Dec 23 22:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-904576                           | mount-start-1-904576 | jenkins | v1.32.0 | 26 Dec 23 22:03 UTC | 26 Dec 23 22:03 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-918870 ssh -- ls                    | mount-start-2-918870 | jenkins | v1.32.0 | 26 Dec 23 22:03 UTC | 26 Dec 23 22:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-918870                           | mount-start-2-918870 | jenkins | v1.32.0 | 26 Dec 23 22:03 UTC | 26 Dec 23 22:03 UTC |
	| start   | -p mount-start-2-918870                           | mount-start-2-918870 | jenkins | v1.32.0 | 26 Dec 23 22:03 UTC | 26 Dec 23 22:03 UTC |
	| ssh     | mount-start-2-918870 ssh -- ls                    | mount-start-2-918870 | jenkins | v1.32.0 | 26 Dec 23 22:03 UTC | 26 Dec 23 22:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-918870                           | mount-start-2-918870 | jenkins | v1.32.0 | 26 Dec 23 22:03 UTC | 26 Dec 23 22:03 UTC |
	| delete  | -p mount-start-1-904576                           | mount-start-1-904576 | jenkins | v1.32.0 | 26 Dec 23 22:03 UTC | 26 Dec 23 22:03 UTC |
	| start   | -p multinode-266826                               | multinode-266826     | jenkins | v1.32.0 | 26 Dec 23 22:03 UTC | 26 Dec 23 22:05 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-266826 -- apply -f                   | multinode-266826     | jenkins | v1.32.0 | 26 Dec 23 22:05 UTC | 26 Dec 23 22:05 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-266826 -- rollout                    | multinode-266826     | jenkins | v1.32.0 | 26 Dec 23 22:05 UTC | 26 Dec 23 22:05 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-266826 -- get pods -o                | multinode-266826     | jenkins | v1.32.0 | 26 Dec 23 22:05 UTC | 26 Dec 23 22:05 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-266826 -- get pods -o                | multinode-266826     | jenkins | v1.32.0 | 26 Dec 23 22:05 UTC | 26 Dec 23 22:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-266826 -- exec                       | multinode-266826     | jenkins | v1.32.0 | 26 Dec 23 22:05 UTC | 26 Dec 23 22:05 UTC |
	|         | busybox-5bc68d56bd-25lpb --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-266826 -- exec                       | multinode-266826     | jenkins | v1.32.0 | 26 Dec 23 22:05 UTC | 26 Dec 23 22:05 UTC |
	|         | busybox-5bc68d56bd-8vrwf --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-266826 -- exec                       | multinode-266826     | jenkins | v1.32.0 | 26 Dec 23 22:05 UTC | 26 Dec 23 22:05 UTC |
	|         | busybox-5bc68d56bd-25lpb --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-266826 -- exec                       | multinode-266826     | jenkins | v1.32.0 | 26 Dec 23 22:05 UTC | 26 Dec 23 22:05 UTC |
	|         | busybox-5bc68d56bd-8vrwf --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-266826 -- exec                       | multinode-266826     | jenkins | v1.32.0 | 26 Dec 23 22:05 UTC | 26 Dec 23 22:05 UTC |
	|         | busybox-5bc68d56bd-25lpb -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-266826 -- exec                       | multinode-266826     | jenkins | v1.32.0 | 26 Dec 23 22:05 UTC | 26 Dec 23 22:05 UTC |
	|         | busybox-5bc68d56bd-8vrwf -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-266826 -- get pods -o                | multinode-266826     | jenkins | v1.32.0 | 26 Dec 23 22:05 UTC | 26 Dec 23 22:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-266826 -- exec                       | multinode-266826     | jenkins | v1.32.0 | 26 Dec 23 22:05 UTC | 26 Dec 23 22:05 UTC |
	|         | busybox-5bc68d56bd-25lpb                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-266826 -- exec                       | multinode-266826     | jenkins | v1.32.0 | 26 Dec 23 22:05 UTC |                     |
	|         | busybox-5bc68d56bd-25lpb -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-266826 -- exec                       | multinode-266826     | jenkins | v1.32.0 | 26 Dec 23 22:05 UTC | 26 Dec 23 22:05 UTC |
	|         | busybox-5bc68d56bd-8vrwf                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-266826 -- exec                       | multinode-266826     | jenkins | v1.32.0 | 26 Dec 23 22:05 UTC |                     |
	|         | busybox-5bc68d56bd-8vrwf -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 22:03:56
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 22:03:56.671983   99116 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:03:56.672239   99116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:03:56.672248   99116 out.go:309] Setting ErrFile to fd 2...
	I1226 22:03:56.672252   99116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:03:56.672423   99116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
	I1226 22:03:56.672973   99116 out.go:303] Setting JSON to false
	I1226 22:03:56.674151   99116 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2787,"bootTime":1703625450,"procs":682,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1226 22:03:56.674223   99116 start.go:138] virtualization: kvm guest
	I1226 22:03:56.676570   99116 out.go:177] * [multinode-266826] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1226 22:03:56.678079   99116 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:03:56.679594   99116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:03:56.678101   99116 notify.go:220] Checking for updates...
	I1226 22:03:56.681256   99116 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 22:03:56.682890   99116 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	I1226 22:03:56.684298   99116 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1226 22:03:56.685662   99116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:03:56.687201   99116 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 22:03:56.708395   99116 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 22:03:56.708487   99116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:03:56.760070   99116 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-26 22:03:56.751846485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 22:03:56.760156   99116 docker.go:295] overlay module found
	I1226 22:03:56.762969   99116 out.go:177] * Using the docker driver based on user configuration
	I1226 22:03:56.764333   99116 start.go:298] selected driver: docker
	I1226 22:03:56.764347   99116 start.go:902] validating driver "docker" against <nil>
	I1226 22:03:56.764357   99116 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 22:03:56.765150   99116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:03:56.816360   99116 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-26 22:03:56.807955251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 22:03:56.816529   99116 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 22:03:56.816797   99116 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 22:03:56.818616   99116 out.go:177] * Using Docker driver with root privileges
	I1226 22:03:56.819963   99116 cni.go:84] Creating CNI manager for ""
	I1226 22:03:56.819982   99116 cni.go:136] 0 nodes found, recommending kindnet
	I1226 22:03:56.819992   99116 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1226 22:03:56.820010   99116 start_flags.go:323] config:
	{Name:multinode-266826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-266826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:03:56.821575   99116 out.go:177] * Starting control plane node multinode-266826 in cluster multinode-266826
	I1226 22:03:56.822815   99116 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 22:03:56.824170   99116 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 22:03:56.825581   99116 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 22:03:56.825607   99116 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1226 22:03:56.825621   99116 cache.go:56] Caching tarball of preloaded images
	I1226 22:03:56.825673   99116 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 22:03:56.825697   99116 preload.go:174] Found /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1226 22:03:56.825708   99116 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1226 22:03:56.826100   99116 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/config.json ...
	I1226 22:03:56.826125   99116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/config.json: {Name:mkfbf42f6bff46a49a89088ada806a574b76372a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:03:56.841327   99116 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 22:03:56.841346   99116 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I1226 22:03:56.841362   99116 cache.go:194] Successfully downloaded all kic artifacts
	I1226 22:03:56.841401   99116 start.go:365] acquiring machines lock for multinode-266826: {Name:mk1821999c782cc2e3d87bee6300b7b2705e2a15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:03:56.841491   99116 start.go:369] acquired machines lock for "multinode-266826" in 70.938µs
	I1226 22:03:56.841514   99116 start.go:93] Provisioning new machine with config: &{Name:multinode-266826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-266826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 22:03:56.841605   99116 start.go:125] createHost starting for "" (driver="docker")
	I1226 22:03:56.843861   99116 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1226 22:03:56.844112   99116 start.go:159] libmachine.API.Create for "multinode-266826" (driver="docker")
	I1226 22:03:56.844145   99116 client.go:168] LocalClient.Create starting
	I1226 22:03:56.844213   99116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem
	I1226 22:03:56.844250   99116 main.go:141] libmachine: Decoding PEM data...
	I1226 22:03:56.844279   99116 main.go:141] libmachine: Parsing certificate...
	I1226 22:03:56.844340   99116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem
	I1226 22:03:56.844371   99116 main.go:141] libmachine: Decoding PEM data...
	I1226 22:03:56.844386   99116 main.go:141] libmachine: Parsing certificate...
	I1226 22:03:56.844720   99116 cli_runner.go:164] Run: docker network inspect multinode-266826 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 22:03:56.859848   99116 cli_runner.go:211] docker network inspect multinode-266826 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 22:03:56.859920   99116 network_create.go:281] running [docker network inspect multinode-266826] to gather additional debugging logs...
	I1226 22:03:56.859940   99116 cli_runner.go:164] Run: docker network inspect multinode-266826
	W1226 22:03:56.874931   99116 cli_runner.go:211] docker network inspect multinode-266826 returned with exit code 1
	I1226 22:03:56.874955   99116 network_create.go:284] error running [docker network inspect multinode-266826]: docker network inspect multinode-266826: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-266826 not found
	I1226 22:03:56.874966   99116 network_create.go:286] output of [docker network inspect multinode-266826]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-266826 not found
	
	** /stderr **
	I1226 22:03:56.875061   99116 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 22:03:56.890077   99116 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4d85ebb5d29e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:dc:19:02:47} reservation:<nil>}
	I1226 22:03:56.890464   99116 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0025e7490}
	I1226 22:03:56.890487   99116 network_create.go:124] attempt to create docker network multinode-266826 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1226 22:03:56.890527   99116 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-266826 multinode-266826
	I1226 22:03:56.940996   99116 network_create.go:108] docker network multinode-266826 192.168.58.0/24 created
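
For reference, the network-creation step logged above can be reproduced by hand. A minimal sketch, taken directly from the command in the log (the odd-looking "-o --ip-masq" and "-o --icc" arguments are passed verbatim by minikube as driver options; the MTU value is inherited from the host bridge):

    # create the isolated bridge network for the profile
    docker network create \
      --driver=bridge \
      --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=multinode-266826 \
      multinode-266826

    # confirm the subnet and gateway took effect
    docker network inspect multinode-266826 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
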
	I1226 22:03:56.941041   99116 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-266826" container
	I1226 22:03:56.941118   99116 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 22:03:56.956227   99116 cli_runner.go:164] Run: docker volume create multinode-266826 --label name.minikube.sigs.k8s.io=multinode-266826 --label created_by.minikube.sigs.k8s.io=true
	I1226 22:03:56.973176   99116 oci.go:103] Successfully created a docker volume multinode-266826
	I1226 22:03:56.973248   99116 cli_runner.go:164] Run: docker run --rm --name multinode-266826-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-266826 --entrypoint /usr/bin/test -v multinode-266826:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 22:03:57.505638   99116 oci.go:107] Successfully prepared a docker volume multinode-266826
	I1226 22:03:57.505685   99116 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 22:03:57.505708   99116 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 22:03:57.505766   99116 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-266826:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I1226 22:04:02.520975   99116 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-266826:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (5.015172787s)
	I1226 22:04:02.521012   99116 kic.go:203] duration metric: took 5.015300 seconds to extract preloaded images to volume
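
The preload step above is nothing more than streaming an lz4-compressed image tarball into the profile's Docker volume through the kicbase image's tar binary. The equivalent standalone command, with paths and image reference taken from the log:

    # extract cached CRI-O images into the multinode-266826 volume (~5s here)
    docker run --rm --entrypoint /usr/bin/tar \
      -v /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro \
      -v multinode-266826:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857 \
      -I lz4 -xf /preloaded.tar -C /extractDir
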
	W1226 22:04:02.521152   99116 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1226 22:04:02.521238   99116 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1226 22:04:02.572971   99116 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-266826 --name multinode-266826 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-266826 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-266826 --network multinode-266826 --ip 192.168.58.2 --volume multinode-266826:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I1226 22:04:02.863480   99116 cli_runner.go:164] Run: docker container inspect multinode-266826 --format={{.State.Running}}
	I1226 22:04:02.881072   99116 cli_runner.go:164] Run: docker container inspect multinode-266826 --format={{.State.Status}}
	I1226 22:04:02.897940   99116 cli_runner.go:164] Run: docker exec multinode-266826 stat /var/lib/dpkg/alternatives/iptables
	I1226 22:04:02.961334   99116 oci.go:144] the created container "multinode-266826" has a running status.
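
The readiness checks after container creation are plain docker probes, reproduced here from the log:

    # verify the node container is running and dpkg alternatives are wired up
    docker container inspect multinode-266826 --format '{{.State.Status}}'
    docker exec multinode-266826 stat /var/lib/dpkg/alternatives/iptables
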
	I1226 22:04:02.961363   99116 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826/id_rsa...
	I1226 22:04:03.188948   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1226 22:04:03.188996   99116 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1226 22:04:03.209111   99116 cli_runner.go:164] Run: docker container inspect multinode-266826 --format={{.State.Status}}
	I1226 22:04:03.230985   99116 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1226 22:04:03.231008   99116 kic_runner.go:114] Args: [docker exec --privileged multinode-266826 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1226 22:04:03.308500   99116 cli_runner.go:164] Run: docker container inspect multinode-266826 --format={{.State.Status}}
	I1226 22:04:03.325514   99116 machine.go:88] provisioning docker machine ...
	I1226 22:04:03.325547   99116 ubuntu.go:169] provisioning hostname "multinode-266826"
	I1226 22:04:03.325602   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826
	I1226 22:04:03.342679   99116 main.go:141] libmachine: Using SSH client type: native
	I1226 22:04:03.343020   99116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1226 22:04:03.343035   99116 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-266826 && echo "multinode-266826" | sudo tee /etc/hostname
	I1226 22:04:03.532345   99116 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-266826
	
	I1226 22:04:03.532410   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826
	I1226 22:04:03.548509   99116 main.go:141] libmachine: Using SSH client type: native
	I1226 22:04:03.548832   99116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1226 22:04:03.548850   99116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-266826' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-266826/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-266826' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 22:04:03.674389   99116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 22:04:03.674420   99116 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-7214/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-7214/.minikube}
	I1226 22:04:03.674441   99116 ubuntu.go:177] setting up certificates
	I1226 22:04:03.674462   99116 provision.go:83] configureAuth start
	I1226 22:04:03.674516   99116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-266826
	I1226 22:04:03.690680   99116 provision.go:138] copyHostCerts
	I1226 22:04:03.690722   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem
	I1226 22:04:03.690747   99116 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem, removing ...
	I1226 22:04:03.690756   99116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem
	I1226 22:04:03.690816   99116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem (1082 bytes)
	I1226 22:04:03.690887   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem
	I1226 22:04:03.690905   99116 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem, removing ...
	I1226 22:04:03.690911   99116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem
	I1226 22:04:03.690934   99116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem (1123 bytes)
	I1226 22:04:03.690983   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem
	I1226 22:04:03.691001   99116 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem, removing ...
	I1226 22:04:03.691007   99116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem
	I1226 22:04:03.691029   99116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem (1679 bytes)
	I1226 22:04:03.691083   99116 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca-key.pem org=jenkins.multinode-266826 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-266826]
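
minikube generates the server certificate with its own Go helpers rather than shelling out. As a rough illustration only (these openssl flags are an assumption, not minikube's actual invocation; the SAN list comes from the log line above):

    # hypothetical openssl equivalent of the server-cert generation step
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.multinode-266826"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -out server.pem -days 1095 \
      -extfile <(printf "subjectAltName=IP:192.168.58.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-266826")
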
	I1226 22:04:03.935264   99116 provision.go:172] copyRemoteCerts
	I1226 22:04:03.935319   99116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 22:04:03.935358   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826
	I1226 22:04:03.951488   99116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826/id_rsa Username:docker}
	I1226 22:04:04.038573   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1226 22:04:04.038668   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 22:04:04.059761   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1226 22:04:04.059821   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1226 22:04:04.079752   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1226 22:04:04.079799   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
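
copyRemoteCerts pushes the three PEM files over the forwarded SSH port (32847 here) via minikube's internal ssh_runner. A manual equivalent, assuming standard ssh tooling on the host:

    # push a cert into the node; minikube does this via its ssh_runner, not scp
    KEY=/home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826/id_rsa
    ssh -i "$KEY" -p 32847 docker@127.0.0.1 "sudo mkdir -p /etc/docker"
    ssh -i "$KEY" -p 32847 docker@127.0.0.1 "sudo tee /etc/docker/ca.pem >/dev/null" \
      < /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem
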
	I1226 22:04:04.099556   99116 provision.go:86] duration metric: configureAuth took 425.081927ms
	I1226 22:04:04.099586   99116 ubuntu.go:193] setting minikube options for container-runtime
	I1226 22:04:04.099795   99116 config.go:182] Loaded profile config "multinode-266826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:04:04.099910   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826
	I1226 22:04:04.117327   99116 main.go:141] libmachine: Using SSH client type: native
	I1226 22:04:04.117666   99116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1226 22:04:04.117688   99116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 22:04:04.317104   99116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 22:04:04.317126   99116 machine.go:91] provisioned docker machine in 991.591384ms
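
The "%!s(MISSING)" token in the container-runtime step above (and again at the crictl.yaml step further down) is an artifact of minikube echoing the command through a Go format string; it is not part of the command itself. The intended remote command is almost certainly:

    sudo mkdir -p /etc/sysconfig && printf "%s" "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
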
	I1226 22:04:04.317135   99116 client.go:171] LocalClient.Create took 7.472980534s
	I1226 22:04:04.317151   99116 start.go:167] duration metric: libmachine.API.Create for "multinode-266826" took 7.473041425s
	I1226 22:04:04.317157   99116 start.go:300] post-start starting for "multinode-266826" (driver="docker")
	I1226 22:04:04.317165   99116 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 22:04:04.317225   99116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 22:04:04.317259   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826
	I1226 22:04:04.333536   99116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826/id_rsa Username:docker}
	I1226 22:04:04.418594   99116 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 22:04:04.421217   99116 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1226 22:04:04.421249   99116 command_runner.go:130] > NAME="Ubuntu"
	I1226 22:04:04.421258   99116 command_runner.go:130] > VERSION_ID="22.04"
	I1226 22:04:04.421268   99116 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1226 22:04:04.421277   99116 command_runner.go:130] > VERSION_CODENAME=jammy
	I1226 22:04:04.421288   99116 command_runner.go:130] > ID=ubuntu
	I1226 22:04:04.421295   99116 command_runner.go:130] > ID_LIKE=debian
	I1226 22:04:04.421306   99116 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1226 22:04:04.421314   99116 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1226 22:04:04.421323   99116 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1226 22:04:04.421332   99116 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1226 22:04:04.421338   99116 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1226 22:04:04.421388   99116 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 22:04:04.421410   99116 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 22:04:04.421420   99116 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 22:04:04.421428   99116 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1226 22:04:04.421437   99116 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-7214/.minikube/addons for local assets ...
	I1226 22:04:04.421476   99116 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-7214/.minikube/files for local assets ...
	I1226 22:04:04.421567   99116 filesync.go:149] local asset: /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem -> 139762.pem in /etc/ssl/certs
	I1226 22:04:04.421579   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem -> /etc/ssl/certs/139762.pem
	I1226 22:04:04.421653   99116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 22:04:04.429013   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem --> /etc/ssl/certs/139762.pem (1708 bytes)
	I1226 22:04:04.449693   99116 start.go:303] post-start completed in 132.523636ms
	I1226 22:04:04.450027   99116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-266826
	I1226 22:04:04.467479   99116 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/config.json ...
	I1226 22:04:04.467708   99116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:04:04.467754   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826
	I1226 22:04:04.483766   99116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826/id_rsa Username:docker}
	I1226 22:04:04.566965   99116 command_runner.go:130] > 20%!(MISSING)
	I1226 22:04:04.567109   99116 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 22:04:04.570948   99116 command_runner.go:130] > 234G
	I1226 22:04:04.570966   99116 start.go:128] duration metric: createHost completed in 7.729352874s
	I1226 22:04:04.570975   99116 start.go:83] releasing machines lock for "multinode-266826", held for 7.729472136s
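
The two disk probes just before the lock release are one-liners run inside the node; the "(MISSING)" glued onto the 20% reading is the same log-formatting artifact noted above:

    df -h /var | awk 'NR==2{print $5}'    # percent of /var used (20% here)
    df -BG /var | awk 'NR==2{print $4}'   # space free in 1G blocks (234G here)
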
	I1226 22:04:04.571026   99116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-266826
	I1226 22:04:04.586883   99116 ssh_runner.go:195] Run: cat /version.json
	I1226 22:04:04.586926   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826
	I1226 22:04:04.586966   99116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 22:04:04.587057   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826
	I1226 22:04:04.603088   99116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826/id_rsa Username:docker}
	I1226 22:04:04.604895   99116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826/id_rsa Username:docker}
	I1226 22:04:04.685823   99116 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1703498848-17857", "minikube_version": "v1.32.0", "commit": "d18dc8d014b22564d2860ddb02a821a21df70433"}
	I1226 22:04:04.685971   99116 ssh_runner.go:195] Run: systemctl --version
	I1226 22:04:04.771568   99116 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1226 22:04:04.773610   99116 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1226 22:04:04.773643   99116 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1226 22:04:04.773701   99116 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1226 22:04:04.908493   99116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 22:04:04.912582   99116 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1226 22:04:04.912603   99116 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1226 22:04:04.912610   99116 command_runner.go:130] > Device: 33h/51d	Inode: 570038      Links: 1
	I1226 22:04:04.912618   99116 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 22:04:04.912628   99116 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1226 22:04:04.912641   99116 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1226 22:04:04.912653   99116 command_runner.go:130] > Change: 2023-12-26 21:44:56.364203087 +0000
	I1226 22:04:04.912664   99116 command_runner.go:130] >  Birth: 2023-12-26 21:44:56.364203087 +0000
	I1226 22:04:04.912725   99116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:04:04.930326   99116 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1226 22:04:04.930401   99116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:04:04.957170   99116 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1226 22:04:04.957204   99116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
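
Both CNI-disabling steps rename matching config files with a .mk_disabled suffix so CRI-O ignores them; the "%!p(MISSING)" in the second command is the log-formatting artifact again, standing in for find's -printf "%p, ". Cleaned up, with the glob patterns quoted so the host shell does not expand them:

    # park any loopback CNI config
    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
      -not -name '*.mk_disabled' -exec sh -c 'sudo mv {} {}.mk_disabled' \;

    # park bridge/podman CNI configs, listing what was moved
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
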
	I1226 22:04:04.957213   99116 start.go:475] detecting cgroup driver to use...
	I1226 22:04:04.957244   99116 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 22:04:04.957296   99116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 22:04:04.970531   99116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 22:04:04.979805   99116 docker.go:203] disabling cri-docker service (if available) ...
	I1226 22:04:04.979856   99116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1226 22:04:04.990973   99116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1226 22:04:05.002540   99116 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1226 22:04:05.082068   99116 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1226 22:04:05.094291   99116 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1226 22:04:05.155397   99116 docker.go:219] disabling docker service ...
	I1226 22:04:05.155467   99116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1226 22:04:05.171782   99116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1226 22:04:05.181512   99116 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1226 22:04:05.191022   99116 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1226 22:04:05.264022   99116 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1226 22:04:05.343170   99116 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1226 22:04:05.343243   99116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1226 22:04:05.352815   99116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 22:04:05.365417   99116 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1226 22:04:05.366210   99116 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1226 22:04:05.366261   99116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:04:05.374243   99116 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1226 22:04:05.374296   99116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:04:05.382191   99116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:04:05.390404   99116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:04:05.398858   99116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 22:04:05.406251   99116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 22:04:05.413032   99116 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1226 22:04:05.413080   99116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 22:04:05.419724   99116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 22:04:05.491632   99116 ssh_runner.go:195] Run: sudo systemctl restart crio
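
Taken together, the CRI-O reconfiguration above boils down to four sed edits plus a restart: set the pause image, switch the cgroup manager to cgroupfs, and pin conmon's cgroup to "pod". Consolidated from the log:

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
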
	I1226 22:04:05.598284   99116 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1226 22:04:05.598339   99116 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1226 22:04:05.601463   99116 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1226 22:04:05.601487   99116 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1226 22:04:05.601510   99116 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I1226 22:04:05.601522   99116 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 22:04:05.601533   99116 command_runner.go:130] > Access: 2023-12-26 22:04:05.585303394 +0000
	I1226 22:04:05.601541   99116 command_runner.go:130] > Modify: 2023-12-26 22:04:05.585303394 +0000
	I1226 22:04:05.601548   99116 command_runner.go:130] > Change: 2023-12-26 22:04:05.585303394 +0000
	I1226 22:04:05.601553   99116 command_runner.go:130] >  Birth: -
	I1226 22:04:05.601579   99116 start.go:543] Will wait 60s for crictl version
	I1226 22:04:05.601612   99116 ssh_runner.go:195] Run: which crictl
	I1226 22:04:05.604503   99116 command_runner.go:130] > /usr/bin/crictl
	I1226 22:04:05.604590   99116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 22:04:05.634219   99116 command_runner.go:130] > Version:  0.1.0
	I1226 22:04:05.634239   99116 command_runner.go:130] > RuntimeName:  cri-o
	I1226 22:04:05.634244   99116 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1226 22:04:05.634249   99116 command_runner.go:130] > RuntimeApiVersion:  v1
	I1226 22:04:05.635908   99116 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1226 22:04:05.635965   99116 ssh_runner.go:195] Run: crio --version
	I1226 22:04:05.667069   99116 command_runner.go:130] > crio version 1.24.6
	I1226 22:04:05.667091   99116 command_runner.go:130] > Version:          1.24.6
	I1226 22:04:05.667100   99116 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1226 22:04:05.667106   99116 command_runner.go:130] > GitTreeState:     clean
	I1226 22:04:05.667114   99116 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1226 22:04:05.667122   99116 command_runner.go:130] > GoVersion:        go1.18.2
	I1226 22:04:05.667129   99116 command_runner.go:130] > Compiler:         gc
	I1226 22:04:05.667137   99116 command_runner.go:130] > Platform:         linux/amd64
	I1226 22:04:05.667150   99116 command_runner.go:130] > Linkmode:         dynamic
	I1226 22:04:05.667166   99116 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1226 22:04:05.667177   99116 command_runner.go:130] > SeccompEnabled:   true
	I1226 22:04:05.667185   99116 command_runner.go:130] > AppArmorEnabled:  false
	I1226 22:04:05.668490   99116 ssh_runner.go:195] Run: crio --version
	I1226 22:04:05.698527   99116 command_runner.go:130] > crio version 1.24.6
	I1226 22:04:05.698549   99116 command_runner.go:130] > Version:          1.24.6
	I1226 22:04:05.698556   99116 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1226 22:04:05.698560   99116 command_runner.go:130] > GitTreeState:     clean
	I1226 22:04:05.698566   99116 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1226 22:04:05.698573   99116 command_runner.go:130] > GoVersion:        go1.18.2
	I1226 22:04:05.698580   99116 command_runner.go:130] > Compiler:         gc
	I1226 22:04:05.698587   99116 command_runner.go:130] > Platform:         linux/amd64
	I1226 22:04:05.698601   99116 command_runner.go:130] > Linkmode:         dynamic
	I1226 22:04:05.698613   99116 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1226 22:04:05.698620   99116 command_runner.go:130] > SeccompEnabled:   true
	I1226 22:04:05.698624   99116 command_runner.go:130] > AppArmorEnabled:  false
	I1226 22:04:05.702079   99116 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1226 22:04:05.703557   99116 cli_runner.go:164] Run: docker network inspect multinode-266826 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 22:04:05.718991   99116 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1226 22:04:05.722171   99116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 22:04:05.731884   99116 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 22:04:05.731928   99116 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 22:04:05.787647   99116 command_runner.go:130] > {
	I1226 22:04:05.787665   99116 command_runner.go:130] >   "images": [
	I1226 22:04:05.787669   99116 command_runner.go:130] >     {
	I1226 22:04:05.787678   99116 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1226 22:04:05.787683   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.787689   99116 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1226 22:04:05.787692   99116 command_runner.go:130] >       ],
	I1226 22:04:05.787697   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.787706   99116 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1226 22:04:05.787714   99116 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1226 22:04:05.787719   99116 command_runner.go:130] >       ],
	I1226 22:04:05.787724   99116 command_runner.go:130] >       "size": "65258016",
	I1226 22:04:05.787734   99116 command_runner.go:130] >       "uid": null,
	I1226 22:04:05.787738   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.787746   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.787753   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.787756   99116 command_runner.go:130] >     },
	I1226 22:04:05.787760   99116 command_runner.go:130] >     {
	I1226 22:04:05.787771   99116 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1226 22:04:05.787779   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.787785   99116 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1226 22:04:05.787791   99116 command_runner.go:130] >       ],
	I1226 22:04:05.787795   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.787806   99116 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1226 22:04:05.787816   99116 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1226 22:04:05.787822   99116 command_runner.go:130] >       ],
	I1226 22:04:05.787830   99116 command_runner.go:130] >       "size": "31470524",
	I1226 22:04:05.787837   99116 command_runner.go:130] >       "uid": null,
	I1226 22:04:05.787841   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.787848   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.787852   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.787858   99116 command_runner.go:130] >     },
	I1226 22:04:05.787862   99116 command_runner.go:130] >     {
	I1226 22:04:05.787870   99116 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1226 22:04:05.787877   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.787882   99116 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1226 22:04:05.787894   99116 command_runner.go:130] >       ],
	I1226 22:04:05.787899   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.787909   99116 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1226 22:04:05.787919   99116 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1226 22:04:05.787925   99116 command_runner.go:130] >       ],
	I1226 22:04:05.787930   99116 command_runner.go:130] >       "size": "53621675",
	I1226 22:04:05.787940   99116 command_runner.go:130] >       "uid": null,
	I1226 22:04:05.787947   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.787951   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.787958   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.787962   99116 command_runner.go:130] >     },
	I1226 22:04:05.787968   99116 command_runner.go:130] >     {
	I1226 22:04:05.787975   99116 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1226 22:04:05.787982   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.787987   99116 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1226 22:04:05.787993   99116 command_runner.go:130] >       ],
	I1226 22:04:05.787997   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.788005   99116 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1226 22:04:05.788016   99116 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1226 22:04:05.788028   99116 command_runner.go:130] >       ],
	I1226 22:04:05.788035   99116 command_runner.go:130] >       "size": "295456551",
	I1226 22:04:05.788040   99116 command_runner.go:130] >       "uid": {
	I1226 22:04:05.788046   99116 command_runner.go:130] >         "value": "0"
	I1226 22:04:05.788050   99116 command_runner.go:130] >       },
	I1226 22:04:05.788056   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.788061   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.788067   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.788082   99116 command_runner.go:130] >     },
	I1226 22:04:05.788088   99116 command_runner.go:130] >     {
	I1226 22:04:05.788095   99116 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1226 22:04:05.788101   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.788106   99116 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1226 22:04:05.788112   99116 command_runner.go:130] >       ],
	I1226 22:04:05.788116   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.788126   99116 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1226 22:04:05.788136   99116 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1226 22:04:05.788145   99116 command_runner.go:130] >       ],
	I1226 22:04:05.788156   99116 command_runner.go:130] >       "size": "127226832",
	I1226 22:04:05.788162   99116 command_runner.go:130] >       "uid": {
	I1226 22:04:05.788167   99116 command_runner.go:130] >         "value": "0"
	I1226 22:04:05.788173   99116 command_runner.go:130] >       },
	I1226 22:04:05.788177   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.788183   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.788188   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.788194   99116 command_runner.go:130] >     },
	I1226 22:04:05.788197   99116 command_runner.go:130] >     {
	I1226 22:04:05.788206   99116 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1226 22:04:05.788213   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.788219   99116 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1226 22:04:05.788225   99116 command_runner.go:130] >       ],
	I1226 22:04:05.788230   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.788241   99116 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1226 22:04:05.788251   99116 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1226 22:04:05.788257   99116 command_runner.go:130] >       ],
	I1226 22:04:05.788263   99116 command_runner.go:130] >       "size": "123261750",
	I1226 22:04:05.788274   99116 command_runner.go:130] >       "uid": {
	I1226 22:04:05.788282   99116 command_runner.go:130] >         "value": "0"
	I1226 22:04:05.788288   99116 command_runner.go:130] >       },
	I1226 22:04:05.788292   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.788299   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.788303   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.788309   99116 command_runner.go:130] >     },
	I1226 22:04:05.788313   99116 command_runner.go:130] >     {
	I1226 22:04:05.788322   99116 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1226 22:04:05.788328   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.788334   99116 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1226 22:04:05.788339   99116 command_runner.go:130] >       ],
	I1226 22:04:05.788344   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.788354   99116 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1226 22:04:05.788362   99116 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1226 22:04:05.788368   99116 command_runner.go:130] >       ],
	I1226 22:04:05.788372   99116 command_runner.go:130] >       "size": "74749335",
	I1226 22:04:05.788381   99116 command_runner.go:130] >       "uid": null,
	I1226 22:04:05.788388   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.788392   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.788396   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.788400   99116 command_runner.go:130] >     },
	I1226 22:04:05.788403   99116 command_runner.go:130] >     {
	I1226 22:04:05.788411   99116 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1226 22:04:05.788418   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.788423   99116 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1226 22:04:05.788429   99116 command_runner.go:130] >       ],
	I1226 22:04:05.788434   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.788455   99116 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1226 22:04:05.788466   99116 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1226 22:04:05.788470   99116 command_runner.go:130] >       ],
	I1226 22:04:05.788474   99116 command_runner.go:130] >       "size": "61551410",
	I1226 22:04:05.788480   99116 command_runner.go:130] >       "uid": {
	I1226 22:04:05.788485   99116 command_runner.go:130] >         "value": "0"
	I1226 22:04:05.788490   99116 command_runner.go:130] >       },
	I1226 22:04:05.788497   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.788503   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.788508   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.788514   99116 command_runner.go:130] >     },
	I1226 22:04:05.788518   99116 command_runner.go:130] >     {
	I1226 22:04:05.788526   99116 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1226 22:04:05.788533   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.788538   99116 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1226 22:04:05.788544   99116 command_runner.go:130] >       ],
	I1226 22:04:05.788548   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.788561   99116 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1226 22:04:05.788571   99116 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1226 22:04:05.788576   99116 command_runner.go:130] >       ],
	I1226 22:04:05.788581   99116 command_runner.go:130] >       "size": "750414",
	I1226 22:04:05.788587   99116 command_runner.go:130] >       "uid": {
	I1226 22:04:05.788591   99116 command_runner.go:130] >         "value": "65535"
	I1226 22:04:05.788597   99116 command_runner.go:130] >       },
	I1226 22:04:05.788602   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.788620   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.788631   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.788636   99116 command_runner.go:130] >     }
	I1226 22:04:05.788640   99116 command_runner.go:130] >   ]
	I1226 22:04:05.788646   99116 command_runner.go:130] > }
	I1226 22:04:05.789998   99116 crio.go:496] all images are preloaded for cri-o runtime.
	I1226 22:04:05.790017   99116 crio.go:415] Images already preloaded, skipping extraction
	I1226 22:04:05.790067   99116 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 22:04:05.819627   99116 command_runner.go:130] > {
	I1226 22:04:05.819646   99116 command_runner.go:130] >   "images": [
	I1226 22:04:05.819650   99116 command_runner.go:130] >     {
	I1226 22:04:05.819660   99116 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1226 22:04:05.819664   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.819670   99116 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1226 22:04:05.819674   99116 command_runner.go:130] >       ],
	I1226 22:04:05.819679   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.819690   99116 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1226 22:04:05.819699   99116 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1226 22:04:05.819705   99116 command_runner.go:130] >       ],
	I1226 22:04:05.819710   99116 command_runner.go:130] >       "size": "65258016",
	I1226 22:04:05.819716   99116 command_runner.go:130] >       "uid": null,
	I1226 22:04:05.819721   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.819730   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.819737   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.819740   99116 command_runner.go:130] >     },
	I1226 22:04:05.819747   99116 command_runner.go:130] >     {
	I1226 22:04:05.819753   99116 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1226 22:04:05.819760   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.819765   99116 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1226 22:04:05.819772   99116 command_runner.go:130] >       ],
	I1226 22:04:05.819777   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.819787   99116 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1226 22:04:05.819798   99116 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1226 22:04:05.819801   99116 command_runner.go:130] >       ],
	I1226 22:04:05.819809   99116 command_runner.go:130] >       "size": "31470524",
	I1226 22:04:05.819813   99116 command_runner.go:130] >       "uid": null,
	I1226 22:04:05.819817   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.819820   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.819824   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.819828   99116 command_runner.go:130] >     },
	I1226 22:04:05.819831   99116 command_runner.go:130] >     {
	I1226 22:04:05.819837   99116 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1226 22:04:05.819843   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.819848   99116 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1226 22:04:05.819854   99116 command_runner.go:130] >       ],
	I1226 22:04:05.819859   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.819868   99116 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1226 22:04:05.819879   99116 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1226 22:04:05.819885   99116 command_runner.go:130] >       ],
	I1226 22:04:05.819890   99116 command_runner.go:130] >       "size": "53621675",
	I1226 22:04:05.819900   99116 command_runner.go:130] >       "uid": null,
	I1226 22:04:05.819904   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.819911   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.819915   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.819921   99116 command_runner.go:130] >     },
	I1226 22:04:05.819925   99116 command_runner.go:130] >     {
	I1226 22:04:05.819933   99116 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1226 22:04:05.819937   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.819944   99116 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1226 22:04:05.819948   99116 command_runner.go:130] >       ],
	I1226 22:04:05.819954   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.819961   99116 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1226 22:04:05.819970   99116 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1226 22:04:05.819982   99116 command_runner.go:130] >       ],
	I1226 22:04:05.819989   99116 command_runner.go:130] >       "size": "295456551",
	I1226 22:04:05.820000   99116 command_runner.go:130] >       "uid": {
	I1226 22:04:05.820006   99116 command_runner.go:130] >         "value": "0"
	I1226 22:04:05.820010   99116 command_runner.go:130] >       },
	I1226 22:04:05.820016   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.820021   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.820027   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.820030   99116 command_runner.go:130] >     },
	I1226 22:04:05.820036   99116 command_runner.go:130] >     {
	I1226 22:04:05.820042   99116 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1226 22:04:05.820049   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.820054   99116 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1226 22:04:05.820060   99116 command_runner.go:130] >       ],
	I1226 22:04:05.820065   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.820074   99116 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1226 22:04:05.820084   99116 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1226 22:04:05.820090   99116 command_runner.go:130] >       ],
	I1226 22:04:05.820095   99116 command_runner.go:130] >       "size": "127226832",
	I1226 22:04:05.820101   99116 command_runner.go:130] >       "uid": {
	I1226 22:04:05.820107   99116 command_runner.go:130] >         "value": "0"
	I1226 22:04:05.820113   99116 command_runner.go:130] >       },
	I1226 22:04:05.820117   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.820124   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.820128   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.820133   99116 command_runner.go:130] >     },
	I1226 22:04:05.820138   99116 command_runner.go:130] >     {
	I1226 22:04:05.820157   99116 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1226 22:04:05.820166   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.820175   99116 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1226 22:04:05.820180   99116 command_runner.go:130] >       ],
	I1226 22:04:05.820185   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.820194   99116 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1226 22:04:05.820204   99116 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1226 22:04:05.820210   99116 command_runner.go:130] >       ],
	I1226 22:04:05.820214   99116 command_runner.go:130] >       "size": "123261750",
	I1226 22:04:05.820220   99116 command_runner.go:130] >       "uid": {
	I1226 22:04:05.820224   99116 command_runner.go:130] >         "value": "0"
	I1226 22:04:05.820239   99116 command_runner.go:130] >       },
	I1226 22:04:05.820246   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.820250   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.820257   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.820261   99116 command_runner.go:130] >     },
	I1226 22:04:05.820267   99116 command_runner.go:130] >     {
	I1226 22:04:05.820273   99116 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1226 22:04:05.820279   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.820285   99116 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1226 22:04:05.820291   99116 command_runner.go:130] >       ],
	I1226 22:04:05.820295   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.820304   99116 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1226 22:04:05.820314   99116 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1226 22:04:05.820319   99116 command_runner.go:130] >       ],
	I1226 22:04:05.820324   99116 command_runner.go:130] >       "size": "74749335",
	I1226 22:04:05.820330   99116 command_runner.go:130] >       "uid": null,
	I1226 22:04:05.820334   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.820341   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.820347   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.820354   99116 command_runner.go:130] >     },
	I1226 22:04:05.820357   99116 command_runner.go:130] >     {
	I1226 22:04:05.820367   99116 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1226 22:04:05.820374   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.820379   99116 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1226 22:04:05.820385   99116 command_runner.go:130] >       ],
	I1226 22:04:05.820389   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.820410   99116 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1226 22:04:05.820425   99116 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1226 22:04:05.820434   99116 command_runner.go:130] >       ],
	I1226 22:04:05.820441   99116 command_runner.go:130] >       "size": "61551410",
	I1226 22:04:05.820445   99116 command_runner.go:130] >       "uid": {
	I1226 22:04:05.820452   99116 command_runner.go:130] >         "value": "0"
	I1226 22:04:05.820455   99116 command_runner.go:130] >       },
	I1226 22:04:05.820461   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.820465   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.820472   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.820478   99116 command_runner.go:130] >     },
	I1226 22:04:05.820484   99116 command_runner.go:130] >     {
	I1226 22:04:05.820490   99116 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1226 22:04:05.820497   99116 command_runner.go:130] >       "repoTags": [
	I1226 22:04:05.820501   99116 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1226 22:04:05.820508   99116 command_runner.go:130] >       ],
	I1226 22:04:05.820512   99116 command_runner.go:130] >       "repoDigests": [
	I1226 22:04:05.820519   99116 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1226 22:04:05.820530   99116 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1226 22:04:05.820536   99116 command_runner.go:130] >       ],
	I1226 22:04:05.820541   99116 command_runner.go:130] >       "size": "750414",
	I1226 22:04:05.820546   99116 command_runner.go:130] >       "uid": {
	I1226 22:04:05.820551   99116 command_runner.go:130] >         "value": "65535"
	I1226 22:04:05.820557   99116 command_runner.go:130] >       },
	I1226 22:04:05.820561   99116 command_runner.go:130] >       "username": "",
	I1226 22:04:05.820565   99116 command_runner.go:130] >       "spec": null,
	I1226 22:04:05.820571   99116 command_runner.go:130] >       "pinned": false
	I1226 22:04:05.820575   99116 command_runner.go:130] >     }
	I1226 22:04:05.820582   99116 command_runner.go:130] >   ]
	I1226 22:04:05.820587   99116 command_runner.go:130] > }
	I1226 22:04:05.821754   99116 crio.go:496] all images are preloaded for cri-o runtime.
	I1226 22:04:05.821771   99116 cache_images.go:84] Images are preloaded, skipping loading
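
The JSON dump above is the image inventory the cache step inspects before deciding to skip the preload: every required repo tag has to show up under some image's "repoTags". Below is a minimal Go sketch of that kind of check, assuming a crictl binary is available on the node and using only the fields visible in the dump (id, repoTags); it is illustrative only, not minikube's cache_images.go implementation.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImage mirrors the fields of interest from the image list shown above.
type criImage struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
}

type criImageList struct {
	Images []criImage `json:"images"`
}

// imagesPreloaded reports whether every required tag is already present in
// the runtime's image store, based on `crictl images -o json` output.
func imagesPreloaded(required []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
	if err != nil {
		return false, fmt.Errorf("crictl images: %w", err)
	}
	var list criImageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, fmt.Errorf("decode image list: %w", err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPreloaded([]string{
		"registry.k8s.io/kube-proxy:v1.28.4",
		"registry.k8s.io/pause:3.9",
	})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("all images preloaded:", ok)
}
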
	I1226 22:04:05.821817   99116 ssh_runner.go:195] Run: crio config
	I1226 22:04:05.857131   99116 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1226 22:04:05.857170   99116 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1226 22:04:05.857180   99116 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1226 22:04:05.857186   99116 command_runner.go:130] > #
	I1226 22:04:05.857201   99116 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1226 22:04:05.857211   99116 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1226 22:04:05.857223   99116 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1226 22:04:05.857238   99116 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1226 22:04:05.857243   99116 command_runner.go:130] > # reload'.
	I1226 22:04:05.857259   99116 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1226 22:04:05.857271   99116 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1226 22:04:05.857281   99116 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1226 22:04:05.857297   99116 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1226 22:04:05.857308   99116 command_runner.go:130] > [crio]
	I1226 22:04:05.857319   99116 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1226 22:04:05.857329   99116 command_runner.go:130] > # containers images, in this directory.
	I1226 22:04:05.857346   99116 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1226 22:04:05.857361   99116 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1226 22:04:05.857373   99116 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1226 22:04:05.857385   99116 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1226 22:04:05.857399   99116 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1226 22:04:05.857407   99116 command_runner.go:130] > # storage_driver = "vfs"
	I1226 22:04:05.857417   99116 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1226 22:04:05.857429   99116 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1226 22:04:05.857437   99116 command_runner.go:130] > # storage_option = [
	I1226 22:04:05.857444   99116 command_runner.go:130] > # ]
	I1226 22:04:05.857460   99116 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1226 22:04:05.857475   99116 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1226 22:04:05.857487   99116 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1226 22:04:05.857498   99116 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1226 22:04:05.857516   99116 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1226 22:04:05.857528   99116 command_runner.go:130] > # always happen on a node reboot
	I1226 22:04:05.857537   99116 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1226 22:04:05.857549   99116 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1226 22:04:05.857577   99116 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1226 22:04:05.857602   99116 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1226 22:04:05.857620   99116 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1226 22:04:05.857638   99116 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1226 22:04:05.857653   99116 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1226 22:04:05.857663   99116 command_runner.go:130] > # internal_wipe = true
	I1226 22:04:05.857678   99116 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1226 22:04:05.857695   99116 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1226 22:04:05.857709   99116 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1226 22:04:05.857720   99116 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1226 22:04:05.857730   99116 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1226 22:04:05.857738   99116 command_runner.go:130] > [crio.api]
	I1226 22:04:05.857751   99116 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1226 22:04:05.857762   99116 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1226 22:04:05.857779   99116 command_runner.go:130] > # IP address on which the stream server will listen.
	I1226 22:04:05.857793   99116 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1226 22:04:05.857805   99116 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1226 22:04:05.857815   99116 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1226 22:04:05.857823   99116 command_runner.go:130] > # stream_port = "0"
	I1226 22:04:05.857834   99116 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1226 22:04:05.857849   99116 command_runner.go:130] > # stream_enable_tls = false
	I1226 22:04:05.857878   99116 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1226 22:04:05.857888   99116 command_runner.go:130] > # stream_idle_timeout = ""
	I1226 22:04:05.857900   99116 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1226 22:04:05.857914   99116 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1226 22:04:05.857924   99116 command_runner.go:130] > # minutes.
	I1226 22:04:05.857932   99116 command_runner.go:130] > # stream_tls_cert = ""
	I1226 22:04:05.857946   99116 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1226 22:04:05.857960   99116 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1226 22:04:05.857971   99116 command_runner.go:130] > # stream_tls_key = ""
	I1226 22:04:05.857981   99116 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1226 22:04:05.857995   99116 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1226 22:04:05.858012   99116 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1226 22:04:05.858023   99116 command_runner.go:130] > # stream_tls_ca = ""
	I1226 22:04:05.858038   99116 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1226 22:04:05.858051   99116 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1226 22:04:05.858067   99116 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1226 22:04:05.858080   99116 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1226 22:04:05.858119   99116 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1226 22:04:05.858134   99116 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1226 22:04:05.858143   99116 command_runner.go:130] > [crio.runtime]
	I1226 22:04:05.858154   99116 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1226 22:04:05.858167   99116 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1226 22:04:05.858178   99116 command_runner.go:130] > # "nofile=1024:2048"
	I1226 22:04:05.858191   99116 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1226 22:04:05.858202   99116 command_runner.go:130] > # default_ulimits = [
	I1226 22:04:05.858209   99116 command_runner.go:130] > # ]
	I1226 22:04:05.858223   99116 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1226 22:04:05.858232   99116 command_runner.go:130] > # no_pivot = false
	I1226 22:04:05.858240   99116 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1226 22:04:05.858260   99116 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1226 22:04:05.858272   99116 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1226 22:04:05.858279   99116 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1226 22:04:05.858287   99116 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1226 22:04:05.858295   99116 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1226 22:04:05.858300   99116 command_runner.go:130] > # conmon = ""
	I1226 22:04:05.858306   99116 command_runner.go:130] > # Cgroup setting for conmon
	I1226 22:04:05.858315   99116 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1226 22:04:05.858320   99116 command_runner.go:130] > conmon_cgroup = "pod"
	I1226 22:04:05.858327   99116 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1226 22:04:05.858334   99116 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1226 22:04:05.858345   99116 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1226 22:04:05.858350   99116 command_runner.go:130] > # conmon_env = [
	I1226 22:04:05.858355   99116 command_runner.go:130] > # ]
	I1226 22:04:05.858362   99116 command_runner.go:130] > # Additional environment variables to set for all the
	I1226 22:04:05.858368   99116 command_runner.go:130] > # containers. These are overridden if set in the
	I1226 22:04:05.858375   99116 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1226 22:04:05.858380   99116 command_runner.go:130] > # default_env = [
	I1226 22:04:05.858387   99116 command_runner.go:130] > # ]
	I1226 22:04:05.858395   99116 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1226 22:04:05.858401   99116 command_runner.go:130] > # selinux = false
	I1226 22:04:05.858411   99116 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1226 22:04:05.858423   99116 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1226 22:04:05.858430   99116 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1226 22:04:05.858438   99116 command_runner.go:130] > # seccomp_profile = ""
	I1226 22:04:05.858445   99116 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1226 22:04:05.858455   99116 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1226 22:04:05.858463   99116 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1226 22:04:05.858472   99116 command_runner.go:130] > # which might increase security.
	I1226 22:04:05.858478   99116 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1226 22:04:05.858491   99116 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1226 22:04:05.858503   99116 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1226 22:04:05.858516   99116 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1226 22:04:05.858527   99116 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1226 22:04:05.858538   99116 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:04:05.858549   99116 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1226 22:04:05.858566   99116 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1226 22:04:05.858577   99116 command_runner.go:130] > # the cgroup blockio controller.
	I1226 22:04:05.858587   99116 command_runner.go:130] > # blockio_config_file = ""
	I1226 22:04:05.858600   99116 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1226 22:04:05.858610   99116 command_runner.go:130] > # irqbalance daemon.
	I1226 22:04:05.858620   99116 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1226 22:04:05.858634   99116 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1226 22:04:05.858646   99116 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:04:05.858672   99116 command_runner.go:130] > # rdt_config_file = ""
	I1226 22:04:05.858685   99116 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1226 22:04:05.858695   99116 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1226 22:04:05.858707   99116 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1226 22:04:05.858718   99116 command_runner.go:130] > # separate_pull_cgroup = ""
	I1226 22:04:05.858732   99116 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1226 22:04:05.858745   99116 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1226 22:04:05.858755   99116 command_runner.go:130] > # will be added.
	I1226 22:04:05.858765   99116 command_runner.go:130] > # default_capabilities = [
	I1226 22:04:05.858774   99116 command_runner.go:130] > # 	"CHOWN",
	I1226 22:04:05.858787   99116 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1226 22:04:05.858797   99116 command_runner.go:130] > # 	"FSETID",
	I1226 22:04:05.858806   99116 command_runner.go:130] > # 	"FOWNER",
	I1226 22:04:05.858816   99116 command_runner.go:130] > # 	"SETGID",
	I1226 22:04:05.858826   99116 command_runner.go:130] > # 	"SETUID",
	I1226 22:04:05.858833   99116 command_runner.go:130] > # 	"SETPCAP",
	I1226 22:04:05.858842   99116 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1226 22:04:05.858857   99116 command_runner.go:130] > # 	"KILL",
	I1226 22:04:05.858867   99116 command_runner.go:130] > # ]
	I1226 22:04:05.858882   99116 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1226 22:04:05.858897   99116 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1226 22:04:05.858908   99116 command_runner.go:130] > # add_inheritable_capabilities = true
	I1226 22:04:05.858927   99116 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1226 22:04:05.858939   99116 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1226 22:04:05.858949   99116 command_runner.go:130] > # default_sysctls = [
	I1226 22:04:05.858958   99116 command_runner.go:130] > # ]
	I1226 22:04:05.858969   99116 command_runner.go:130] > # List of devices on the host that a
	I1226 22:04:05.858983   99116 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1226 22:04:05.858998   99116 command_runner.go:130] > # allowed_devices = [
	I1226 22:04:05.859008   99116 command_runner.go:130] > # 	"/dev/fuse",
	I1226 22:04:05.859013   99116 command_runner.go:130] > # ]
	I1226 22:04:05.859025   99116 command_runner.go:130] > # List of additional devices. specified as
	I1226 22:04:05.859068   99116 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1226 22:04:05.859079   99116 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1226 22:04:05.859090   99116 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1226 22:04:05.859100   99116 command_runner.go:130] > # additional_devices = [
	I1226 22:04:05.859105   99116 command_runner.go:130] > # ]
	I1226 22:04:05.859115   99116 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1226 22:04:05.859123   99116 command_runner.go:130] > # cdi_spec_dirs = [
	I1226 22:04:05.859132   99116 command_runner.go:130] > # 	"/etc/cdi",
	I1226 22:04:05.859140   99116 command_runner.go:130] > # 	"/var/run/cdi",
	I1226 22:04:05.859154   99116 command_runner.go:130] > # ]
	I1226 22:04:05.859166   99116 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1226 22:04:05.859178   99116 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1226 22:04:05.859187   99116 command_runner.go:130] > # Defaults to false.
	I1226 22:04:05.859197   99116 command_runner.go:130] > # device_ownership_from_security_context = false
	I1226 22:04:05.859213   99116 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1226 22:04:05.859225   99116 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1226 22:04:05.859233   99116 command_runner.go:130] > # hooks_dir = [
	I1226 22:04:05.859243   99116 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1226 22:04:05.859251   99116 command_runner.go:130] > # ]
	I1226 22:04:05.859266   99116 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1226 22:04:05.859279   99116 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1226 22:04:05.859289   99116 command_runner.go:130] > # its default mounts from the following two files:
	I1226 22:04:05.859297   99116 command_runner.go:130] > #
	I1226 22:04:05.859309   99116 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1226 22:04:05.859321   99116 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1226 22:04:05.859333   99116 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1226 22:04:05.859341   99116 command_runner.go:130] > #
	I1226 22:04:05.859353   99116 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1226 22:04:05.859365   99116 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1226 22:04:05.859378   99116 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1226 22:04:05.859389   99116 command_runner.go:130] > #      only add mounts it finds in this file.
	I1226 22:04:05.859397   99116 command_runner.go:130] > #
	I1226 22:04:05.859409   99116 command_runner.go:130] > # default_mounts_file = ""
	I1226 22:04:05.859420   99116 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1226 22:04:05.859434   99116 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1226 22:04:05.859443   99116 command_runner.go:130] > # pids_limit = 0
	I1226 22:04:05.859455   99116 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1226 22:04:05.859468   99116 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1226 22:04:05.859481   99116 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1226 22:04:05.859497   99116 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1226 22:04:05.859507   99116 command_runner.go:130] > # log_size_max = -1
	I1226 22:04:05.859521   99116 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1226 22:04:05.859531   99116 command_runner.go:130] > # log_to_journald = false
	I1226 22:04:05.859544   99116 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1226 22:04:05.859555   99116 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1226 22:04:05.859567   99116 command_runner.go:130] > # Path to directory for container attach sockets.
	I1226 22:04:05.859579   99116 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1226 22:04:05.859592   99116 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1226 22:04:05.859607   99116 command_runner.go:130] > # bind_mount_prefix = ""
	I1226 22:04:05.859619   99116 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1226 22:04:05.859630   99116 command_runner.go:130] > # read_only = false
	I1226 22:04:05.859644   99116 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1226 22:04:05.859658   99116 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1226 22:04:05.859668   99116 command_runner.go:130] > # live configuration reload.
	I1226 22:04:05.859677   99116 command_runner.go:130] > # log_level = "info"
	I1226 22:04:05.859688   99116 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1226 22:04:05.859699   99116 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:04:05.859709   99116 command_runner.go:130] > # log_filter = ""
	I1226 22:04:05.859721   99116 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1226 22:04:05.859734   99116 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1226 22:04:05.859747   99116 command_runner.go:130] > # separated by comma.
	I1226 22:04:05.859755   99116 command_runner.go:130] > # uid_mappings = ""
	I1226 22:04:05.859768   99116 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1226 22:04:05.859784   99116 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1226 22:04:05.859793   99116 command_runner.go:130] > # separated by comma.
	I1226 22:04:05.859801   99116 command_runner.go:130] > # gid_mappings = ""
	I1226 22:04:05.859814   99116 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1226 22:04:05.859826   99116 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1226 22:04:05.859843   99116 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1226 22:04:05.859859   99116 command_runner.go:130] > # minimum_mappable_uid = -1
	I1226 22:04:05.859872   99116 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1226 22:04:05.859884   99116 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1226 22:04:05.859896   99116 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1226 22:04:05.859905   99116 command_runner.go:130] > # minimum_mappable_gid = -1
	I1226 22:04:05.859917   99116 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1226 22:04:05.859929   99116 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1226 22:04:05.859940   99116 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1226 22:04:05.859949   99116 command_runner.go:130] > # ctr_stop_timeout = 30
	I1226 22:04:05.859958   99116 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1226 22:04:05.859974   99116 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1226 22:04:05.859986   99116 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1226 22:04:05.859998   99116 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1226 22:04:05.860008   99116 command_runner.go:130] > # drop_infra_ctr = true
	I1226 22:04:05.860019   99116 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1226 22:04:05.860031   99116 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1226 22:04:05.860046   99116 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1226 22:04:05.860059   99116 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1226 22:04:05.860071   99116 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1226 22:04:05.860081   99116 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1226 22:04:05.860090   99116 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1226 22:04:05.860105   99116 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1226 22:04:05.860114   99116 command_runner.go:130] > # pinns_path = ""
	I1226 22:04:05.860125   99116 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1226 22:04:05.860137   99116 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1226 22:04:05.860149   99116 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1226 22:04:05.860159   99116 command_runner.go:130] > # default_runtime = "runc"
	I1226 22:04:05.860168   99116 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1226 22:04:05.860184   99116 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1226 22:04:05.860201   99116 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1226 22:04:05.860212   99116 command_runner.go:130] > # creation as a file is not desired either.
	I1226 22:04:05.860227   99116 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1226 22:04:05.860234   99116 command_runner.go:130] > # the hostname is being managed dynamically.
	I1226 22:04:05.860239   99116 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1226 22:04:05.860243   99116 command_runner.go:130] > # ]
	I1226 22:04:05.860259   99116 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1226 22:04:05.860268   99116 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1226 22:04:05.860275   99116 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1226 22:04:05.860283   99116 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1226 22:04:05.860287   99116 command_runner.go:130] > #
	I1226 22:04:05.860294   99116 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1226 22:04:05.860300   99116 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1226 22:04:05.860306   99116 command_runner.go:130] > #  runtime_type = "oci"
	I1226 22:04:05.860311   99116 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1226 22:04:05.860318   99116 command_runner.go:130] > #  privileged_without_host_devices = false
	I1226 22:04:05.860322   99116 command_runner.go:130] > #  allowed_annotations = []
	I1226 22:04:05.860328   99116 command_runner.go:130] > # Where:
	I1226 22:04:05.860334   99116 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1226 22:04:05.860343   99116 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1226 22:04:05.860353   99116 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1226 22:04:05.860366   99116 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1226 22:04:05.860375   99116 command_runner.go:130] > #   in $PATH.
	I1226 22:04:05.860388   99116 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1226 22:04:05.860400   99116 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1226 22:04:05.860414   99116 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1226 22:04:05.860424   99116 command_runner.go:130] > #   state.
	I1226 22:04:05.860435   99116 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1226 22:04:05.860447   99116 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1226 22:04:05.860460   99116 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1226 22:04:05.860473   99116 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1226 22:04:05.860486   99116 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1226 22:04:05.860500   99116 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1226 22:04:05.860511   99116 command_runner.go:130] > #   The currently recognized values are:
	I1226 22:04:05.860524   99116 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1226 22:04:05.860536   99116 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1226 22:04:05.860547   99116 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1226 22:04:05.860561   99116 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1226 22:04:05.860576   99116 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1226 22:04:05.860593   99116 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1226 22:04:05.860606   99116 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1226 22:04:05.860619   99116 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1226 22:04:05.860632   99116 command_runner.go:130] > #   should be moved to the container's cgroup
	I1226 22:04:05.860643   99116 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1226 22:04:05.860655   99116 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1226 22:04:05.860665   99116 command_runner.go:130] > runtime_type = "oci"
	I1226 22:04:05.860675   99116 command_runner.go:130] > runtime_root = "/run/runc"
	I1226 22:04:05.860683   99116 command_runner.go:130] > runtime_config_path = ""
	I1226 22:04:05.860693   99116 command_runner.go:130] > monitor_path = ""
	I1226 22:04:05.860702   99116 command_runner.go:130] > monitor_cgroup = ""
	I1226 22:04:05.860710   99116 command_runner.go:130] > monitor_exec_cgroup = ""
	I1226 22:04:05.860793   99116 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1226 22:04:05.860808   99116 command_runner.go:130] > # running containers
	I1226 22:04:05.860816   99116 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1226 22:04:05.860833   99116 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1226 22:04:05.860848   99116 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1226 22:04:05.860864   99116 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1226 22:04:05.860876   99116 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1226 22:04:05.860884   99116 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1226 22:04:05.860894   99116 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1226 22:04:05.860908   99116 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1226 22:04:05.860922   99116 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1226 22:04:05.860933   99116 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1226 22:04:05.860946   99116 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1226 22:04:05.860958   99116 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1226 22:04:05.860969   99116 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1226 22:04:05.860982   99116 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1226 22:04:05.860998   99116 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1226 22:04:05.861011   99116 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1226 22:04:05.861029   99116 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1226 22:04:05.861044   99116 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1226 22:04:05.861057   99116 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1226 22:04:05.861070   99116 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1226 22:04:05.861079   99116 command_runner.go:130] > # Example:
	I1226 22:04:05.861088   99116 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1226 22:04:05.861103   99116 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1226 22:04:05.861115   99116 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1226 22:04:05.861126   99116 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1226 22:04:05.861138   99116 command_runner.go:130] > # cpuset = 0
	I1226 22:04:05.861147   99116 command_runner.go:130] > # cpushares = "0-1"
	I1226 22:04:05.861155   99116 command_runner.go:130] > # Where:
	I1226 22:04:05.861166   99116 command_runner.go:130] > # The workload name is workload-type.
	I1226 22:04:05.861187   99116 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1226 22:04:05.861199   99116 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1226 22:04:05.861211   99116 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1226 22:04:05.861226   99116 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1226 22:04:05.861235   99116 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1226 22:04:05.861242   99116 command_runner.go:130] > # 
	I1226 22:04:05.861257   99116 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1226 22:04:05.861265   99116 command_runner.go:130] > #
	I1226 22:04:05.861278   99116 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1226 22:04:05.861291   99116 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1226 22:04:05.861305   99116 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1226 22:04:05.861318   99116 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1226 22:04:05.861330   99116 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1226 22:04:05.861340   99116 command_runner.go:130] > [crio.image]
	I1226 22:04:05.861353   99116 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1226 22:04:05.861363   99116 command_runner.go:130] > # default_transport = "docker://"
	I1226 22:04:05.861376   99116 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1226 22:04:05.861389   99116 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1226 22:04:05.861398   99116 command_runner.go:130] > # global_auth_file = ""
	I1226 22:04:05.861406   99116 command_runner.go:130] > # The image used to instantiate infra containers.
	I1226 22:04:05.861417   99116 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:04:05.861429   99116 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1226 22:04:05.861451   99116 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1226 22:04:05.861461   99116 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1226 22:04:05.861473   99116 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:04:05.861484   99116 command_runner.go:130] > # pause_image_auth_file = ""
	I1226 22:04:05.861497   99116 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1226 22:04:05.861510   99116 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1226 22:04:05.861523   99116 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1226 22:04:05.861536   99116 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1226 22:04:05.861546   99116 command_runner.go:130] > # pause_command = "/pause"
	I1226 22:04:05.861559   99116 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1226 22:04:05.861572   99116 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1226 22:04:05.861581   99116 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1226 22:04:05.861590   99116 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1226 22:04:05.861596   99116 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1226 22:04:05.861606   99116 command_runner.go:130] > # signature_policy = ""
	I1226 22:04:05.861624   99116 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1226 22:04:05.861637   99116 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1226 22:04:05.861647   99116 command_runner.go:130] > # changing them here.
	I1226 22:04:05.861657   99116 command_runner.go:130] > # insecure_registries = [
	I1226 22:04:05.861666   99116 command_runner.go:130] > # ]
	I1226 22:04:05.861676   99116 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1226 22:04:05.861682   99116 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1226 22:04:05.861689   99116 command_runner.go:130] > # image_volumes = "mkdir"
	I1226 22:04:05.861694   99116 command_runner.go:130] > # Temporary directory to use for storing big files
	I1226 22:04:05.861701   99116 command_runner.go:130] > # big_files_temporary_dir = ""
	I1226 22:04:05.861707   99116 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1226 22:04:05.861713   99116 command_runner.go:130] > # CNI plugins.
	I1226 22:04:05.861717   99116 command_runner.go:130] > [crio.network]
	I1226 22:04:05.861728   99116 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1226 22:04:05.861736   99116 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1226 22:04:05.861744   99116 command_runner.go:130] > # cni_default_network = ""
	I1226 22:04:05.861757   99116 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1226 22:04:05.861768   99116 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1226 22:04:05.861781   99116 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1226 22:04:05.861791   99116 command_runner.go:130] > # plugin_dirs = [
	I1226 22:04:05.861800   99116 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1226 22:04:05.861809   99116 command_runner.go:130] > # ]
	I1226 22:04:05.861819   99116 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1226 22:04:05.861826   99116 command_runner.go:130] > [crio.metrics]
	I1226 22:04:05.861831   99116 command_runner.go:130] > # Globally enable or disable metrics support.
	I1226 22:04:05.861838   99116 command_runner.go:130] > # enable_metrics = false
	I1226 22:04:05.861843   99116 command_runner.go:130] > # Specify enabled metrics collectors.
	I1226 22:04:05.861854   99116 command_runner.go:130] > # Per default all metrics are enabled.
	I1226 22:04:05.861862   99116 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1226 22:04:05.861870   99116 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1226 22:04:05.861878   99116 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1226 22:04:05.861888   99116 command_runner.go:130] > # metrics_collectors = [
	I1226 22:04:05.861894   99116 command_runner.go:130] > # 	"operations",
	I1226 22:04:05.861899   99116 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1226 22:04:05.861906   99116 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1226 22:04:05.861910   99116 command_runner.go:130] > # 	"operations_errors",
	I1226 22:04:05.861916   99116 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1226 22:04:05.861921   99116 command_runner.go:130] > # 	"image_pulls_by_name",
	I1226 22:04:05.861927   99116 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1226 22:04:05.861932   99116 command_runner.go:130] > # 	"image_pulls_failures",
	I1226 22:04:05.861938   99116 command_runner.go:130] > # 	"image_pulls_successes",
	I1226 22:04:05.861948   99116 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1226 22:04:05.861953   99116 command_runner.go:130] > # 	"image_layer_reuse",
	I1226 22:04:05.861959   99116 command_runner.go:130] > # 	"containers_oom_total",
	I1226 22:04:05.861963   99116 command_runner.go:130] > # 	"containers_oom",
	I1226 22:04:05.861971   99116 command_runner.go:130] > # 	"processes_defunct",
	I1226 22:04:05.861981   99116 command_runner.go:130] > # 	"operations_total",
	I1226 22:04:05.861992   99116 command_runner.go:130] > # 	"operations_latency_seconds",
	I1226 22:04:05.862004   99116 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1226 22:04:05.862012   99116 command_runner.go:130] > # 	"operations_errors_total",
	I1226 22:04:05.862017   99116 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1226 22:04:05.862024   99116 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1226 22:04:05.862029   99116 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1226 22:04:05.862035   99116 command_runner.go:130] > # 	"image_pulls_success_total",
	I1226 22:04:05.862040   99116 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1226 22:04:05.862045   99116 command_runner.go:130] > # 	"containers_oom_count_total",
	I1226 22:04:05.862051   99116 command_runner.go:130] > # ]
	I1226 22:04:05.862057   99116 command_runner.go:130] > # The port on which the metrics server will listen.
	I1226 22:04:05.862063   99116 command_runner.go:130] > # metrics_port = 9090
	I1226 22:04:05.862069   99116 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1226 22:04:05.862075   99116 command_runner.go:130] > # metrics_socket = ""
	I1226 22:04:05.862080   99116 command_runner.go:130] > # The certificate for the secure metrics server.
	I1226 22:04:05.862088   99116 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1226 22:04:05.862097   99116 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1226 22:04:05.862101   99116 command_runner.go:130] > # certificate on any modification event.
	I1226 22:04:05.862108   99116 command_runner.go:130] > # metrics_cert = ""
	I1226 22:04:05.862113   99116 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1226 22:04:05.862123   99116 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1226 22:04:05.862130   99116 command_runner.go:130] > # metrics_key = ""
	I1226 22:04:05.862136   99116 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1226 22:04:05.862142   99116 command_runner.go:130] > [crio.tracing]
	I1226 22:04:05.862148   99116 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1226 22:04:05.862154   99116 command_runner.go:130] > # enable_tracing = false
	I1226 22:04:05.862160   99116 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1226 22:04:05.862167   99116 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1226 22:04:05.862175   99116 command_runner.go:130] > # Number of samples to collect per million spans.
	I1226 22:04:05.862182   99116 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1226 22:04:05.862187   99116 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1226 22:04:05.862193   99116 command_runner.go:130] > [crio.stats]
	I1226 22:04:05.862199   99116 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1226 22:04:05.862207   99116 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1226 22:04:05.862214   99116 command_runner.go:130] > # stats_collection_period = 0
	I1226 22:04:05.862250   99116 command_runner.go:130] ! time="2023-12-26 22:04:05.854413194Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1226 22:04:05.862266   99116 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
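
The `crio config` dump ending above is what the provisioner checks runtime settings against, in particular cgroup_manager = "cgroupfs" and pause_image = "registry.k8s.io/pause:3.9", both of which reappear in the kubeadm options below. A hedged sketch of extracting just those two keys, assuming the github.com/BurntSushi/toml package rather than whatever minikube actually uses:

package main

import (
	"fmt"
	"os/exec"

	"github.com/BurntSushi/toml"
)

// crioConfig models only the two settings of interest from the
// [crio.runtime] and [crio.image] tables shown in the dump above.
type crioConfig struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	out, err := exec.Command("sudo", "crio", "config").Output()
	if err != nil {
		fmt.Println("crio config:", err)
		return
	}
	var cfg crioConfig
	if _, err := toml.Decode(string(out), &cfg); err != nil {
		fmt.Println("parse TOML:", err)
		return
	}
	fmt.Println("cgroup manager:", cfg.Crio.Runtime.CgroupManager) // "cgroupfs" in the dump above
	fmt.Println("pause image:", cfg.Crio.Image.PauseImage)         // "registry.k8s.io/pause:3.9" in the dump above
}
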
	I1226 22:04:05.862343   99116 cni.go:84] Creating CNI manager for ""
	I1226 22:04:05.862355   99116 cni.go:136] 1 nodes found, recommending kindnet
	I1226 22:04:05.862382   99116 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 22:04:05.862401   99116 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-266826 NodeName:multinode-266826 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1226 22:04:05.862533   99116 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-266826"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
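
The kubeadm, kubelet and kube-proxy YAML documents above are rendered from the "kubeadm options" struct logged a few lines earlier. The real template lives in minikube's bootstrapper and carries many more fields; purely as a sketch of the mechanism, the same idea can be expressed with the standard library's text/template:

package main

import (
	"os"
	"text/template"
)

// kubeadmTmpl is a deliberately trimmed-down stand-in for the bootstrapper's
// real template; it covers only a few of the fields visible in the log.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type kubeadmParams struct {
	AdvertiseAddress    string
	APIServerPort       int
	CRISocket           string
	NodeName            string
	NodeIP              string
	KubernetesVersion   string
	ControlPlaneAddress string
	DNSDomain           string
	PodSubnet           string
	ServiceCIDR         string
}

func main() {
	// Values copied from the "kubeadm options" log line above.
	p := kubeadmParams{
		AdvertiseAddress:    "192.168.58.2",
		APIServerPort:       8443,
		CRISocket:           "/var/run/crio/crio.sock",
		NodeName:            "multinode-266826",
		NodeIP:              "192.168.58.2",
		KubernetesVersion:   "v1.28.4",
		ControlPlaneAddress: "control-plane.minikube.internal",
		DNSDomain:           "cluster.local",
		PodSubnet:           "10.244.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
	}
	// The real code writes the rendered config to /var/tmp/minikube/kubeadm.yaml.new;
	// here it just goes to stdout.
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
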
	
	I1226 22:04:05.862607   99116 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-266826 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-266826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1226 22:04:05.862676   99116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1226 22:04:05.869719   99116 command_runner.go:130] > kubeadm
	I1226 22:04:05.869739   99116 command_runner.go:130] > kubectl
	I1226 22:04:05.869744   99116 command_runner.go:130] > kubelet
	I1226 22:04:05.870309   99116 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 22:04:05.870359   99116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1226 22:04:05.877355   99116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1226 22:04:05.892245   99116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1226 22:04:05.907094   99116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
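The drop-in written above to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf only takes effect once systemd re-reads its unit files; that happens later in the init flow, but when reproducing these steps manually the usual sequence is (a sketch, not taken from this log):

    sudo systemctl daemon-reload          # pick up the new kubelet.service and drop-in
    sudo systemctl enable --now kubelet   # the preflight warning further down notes the unit is not yet enabled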
	I1226 22:04:05.921498   99116 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1226 22:04:05.924300   99116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 22:04:05.933272   99116 certs.go:56] Setting up /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826 for IP: 192.168.58.2
	I1226 22:04:05.933299   99116 certs.go:190] acquiring lock for shared ca certs: {Name:mk3336638bd66053c32b2c7f6f2d1c6a563fd761 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:04:05.933409   99116 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.key
	I1226 22:04:05.933454   99116 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.key
	I1226 22:04:05.933493   99116 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/client.key
	I1226 22:04:05.933504   99116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/client.crt with IP's: []
	I1226 22:04:06.047493   99116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/client.crt ...
	I1226 22:04:06.047523   99116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/client.crt: {Name:mk7693b6114e42ed35e7e5df85fcca20b6aee5eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:04:06.047698   99116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/client.key ...
	I1226 22:04:06.047711   99116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/client.key: {Name:mkd7e18a99809b1e9d4061738f855b58b55cb906 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:04:06.047807   99116 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/apiserver.key.cee25041
	I1226 22:04:06.047825   99116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1226 22:04:06.151160   99116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/apiserver.crt.cee25041 ...
	I1226 22:04:06.151190   99116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/apiserver.crt.cee25041: {Name:mk529afb4e0dafb5e9eb25a95b76255b4741c7f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:04:06.151354   99116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/apiserver.key.cee25041 ...
	I1226 22:04:06.151370   99116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/apiserver.key.cee25041: {Name:mkdc275cbe2ead70e220d9ebc8df7ae67f65111a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:04:06.151460   99116 certs.go:337] copying /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/apiserver.crt
	I1226 22:04:06.151546   99116 certs.go:341] copying /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/apiserver.key
	I1226 22:04:06.151599   99116 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/proxy-client.key
	I1226 22:04:06.151615   99116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/proxy-client.crt with IP's: []
	I1226 22:04:06.330167   99116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/proxy-client.crt ...
	I1226 22:04:06.330196   99116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/proxy-client.crt: {Name:mke5550addec8d7b7a56643d9aadaf40ccd32458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:04:06.330372   99116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/proxy-client.key ...
	I1226 22:04:06.330389   99116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/proxy-client.key: {Name:mk77eca74e6942c8cbb30bb39e9bbd7b42a05b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:04:06.330483   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1226 22:04:06.330504   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1226 22:04:06.330514   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1226 22:04:06.330526   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1226 22:04:06.330535   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1226 22:04:06.330545   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1226 22:04:06.330557   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1226 22:04:06.330568   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1226 22:04:06.330614   99116 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/13976.pem (1338 bytes)
	W1226 22:04:06.330649   99116 certs.go:433] ignoring /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/13976_empty.pem, impossibly tiny 0 bytes
	I1226 22:04:06.330677   99116 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca-key.pem (1679 bytes)
	I1226 22:04:06.330698   99116 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem (1082 bytes)
	I1226 22:04:06.330718   99116 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem (1123 bytes)
	I1226 22:04:06.330736   99116 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem (1679 bytes)
	I1226 22:04:06.330777   99116 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem (1708 bytes)
	I1226 22:04:06.330818   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:04:06.330843   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/13976.pem -> /usr/share/ca-certificates/13976.pem
	I1226 22:04:06.330856   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem -> /usr/share/ca-certificates/139762.pem
	I1226 22:04:06.331393   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1226 22:04:06.352158   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1226 22:04:06.372000   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1226 22:04:06.392202   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1226 22:04:06.412676   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 22:04:06.433081   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 22:04:06.453285   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 22:04:06.474725   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1226 22:04:06.494228   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 22:04:06.513999   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/certs/13976.pem --> /usr/share/ca-certificates/13976.pem (1338 bytes)
	I1226 22:04:06.533641   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem --> /usr/share/ca-certificates/139762.pem (1708 bytes)
	I1226 22:04:06.553372   99116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1226 22:04:06.567871   99116 ssh_runner.go:195] Run: openssl version
	I1226 22:04:06.572582   99116 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1226 22:04:06.572798   99116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13976.pem && ln -fs /usr/share/ca-certificates/13976.pem /etc/ssl/certs/13976.pem"
	I1226 22:04:06.580658   99116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13976.pem
	I1226 22:04:06.583614   99116 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 26 21:51 /usr/share/ca-certificates/13976.pem
	I1226 22:04:06.583635   99116 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 26 21:51 /usr/share/ca-certificates/13976.pem
	I1226 22:04:06.583669   99116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13976.pem
	I1226 22:04:06.589233   99116 command_runner.go:130] > 51391683
	I1226 22:04:06.589462   99116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13976.pem /etc/ssl/certs/51391683.0"
	I1226 22:04:06.597220   99116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139762.pem && ln -fs /usr/share/ca-certificates/139762.pem /etc/ssl/certs/139762.pem"
	I1226 22:04:06.604938   99116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139762.pem
	I1226 22:04:06.607694   99116 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 26 21:51 /usr/share/ca-certificates/139762.pem
	I1226 22:04:06.607727   99116 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 26 21:51 /usr/share/ca-certificates/139762.pem
	I1226 22:04:06.607755   99116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139762.pem
	I1226 22:04:06.613299   99116 command_runner.go:130] > 3ec20f2e
	I1226 22:04:06.613505   99116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139762.pem /etc/ssl/certs/3ec20f2e.0"
	I1226 22:04:06.621203   99116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 22:04:06.628793   99116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:04:06.631612   99116 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 26 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:04:06.631630   99116 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:04:06.631663   99116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:04:06.637415   99116 command_runner.go:130] > b5213941
	I1226 22:04:06.637681   99116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
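The hash-and-link steps above implement OpenSSL's CApath convention: a certificate under /etc/ssl/certs is located via a symlink named <subject-hash>.0. A minimal sketch of the same step for one certificate, using the same paths as the log (any PEM certificate works):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject hash, e.g. b5213941 above
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # CApath lookup finds the cert by this name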
	I1226 22:04:06.645305   99116 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 22:04:06.647977   99116 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 22:04:06.648032   99116 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 22:04:06.648088   99116 kubeadm.go:404] StartCluster: {Name:multinode-266826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-266826 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:04:06.648158   99116 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1226 22:04:06.648210   99116 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1226 22:04:06.679399   99116 cri.go:89] found id: ""
	I1226 22:04:06.679459   99116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1226 22:04:06.687366   99116 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1226 22:04:06.687391   99116 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1226 22:04:06.687400   99116 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1226 22:04:06.687464   99116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1226 22:04:06.695107   99116 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1226 22:04:06.695149   99116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1226 22:04:06.702447   99116 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1226 22:04:06.702466   99116 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1226 22:04:06.702473   99116 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1226 22:04:06.702482   99116 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 22:04:06.702509   99116 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 22:04:06.702539   99116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1226 22:04:06.778972   99116 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1226 22:04:06.779005   99116 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1226 22:04:06.840509   99116 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 22:04:06.840531   99116 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 22:04:16.065014   99116 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1226 22:04:16.065064   99116 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1226 22:04:16.065145   99116 kubeadm.go:322] [preflight] Running pre-flight checks
	I1226 22:04:16.065161   99116 command_runner.go:130] > [preflight] Running pre-flight checks
	I1226 22:04:16.065256   99116 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1226 22:04:16.065274   99116 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1226 22:04:16.065334   99116 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1226 22:04:16.065353   99116 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I1226 22:04:16.065469   99116 kubeadm.go:322] OS: Linux
	I1226 22:04:16.065496   99116 command_runner.go:130] > OS: Linux
	I1226 22:04:16.065589   99116 kubeadm.go:322] CGROUPS_CPU: enabled
	I1226 22:04:16.065604   99116 command_runner.go:130] > CGROUPS_CPU: enabled
	I1226 22:04:16.065682   99116 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1226 22:04:16.065701   99116 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1226 22:04:16.065759   99116 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1226 22:04:16.065771   99116 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1226 22:04:16.065848   99116 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1226 22:04:16.065867   99116 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1226 22:04:16.065918   99116 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1226 22:04:16.065933   99116 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1226 22:04:16.065985   99116 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1226 22:04:16.065996   99116 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1226 22:04:16.066058   99116 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1226 22:04:16.066070   99116 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1226 22:04:16.066127   99116 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1226 22:04:16.066138   99116 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1226 22:04:16.066198   99116 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1226 22:04:16.066211   99116 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1226 22:04:16.066298   99116 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1226 22:04:16.066309   99116 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1226 22:04:16.066417   99116 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1226 22:04:16.066432   99116 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1226 22:04:16.066552   99116 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1226 22:04:16.066562   99116 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1226 22:04:16.066637   99116 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 22:04:16.068873   99116 out.go:204]   - Generating certificates and keys ...
	I1226 22:04:16.066778   99116 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 22:04:16.068965   99116 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1226 22:04:16.068986   99116 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1226 22:04:16.069061   99116 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1226 22:04:16.069072   99116 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1226 22:04:16.069155   99116 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1226 22:04:16.069166   99116 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1226 22:04:16.069238   99116 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1226 22:04:16.069249   99116 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1226 22:04:16.069326   99116 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1226 22:04:16.069336   99116 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1226 22:04:16.069400   99116 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1226 22:04:16.069412   99116 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1226 22:04:16.069488   99116 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1226 22:04:16.069499   99116 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1226 22:04:16.069654   99116 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-266826] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1226 22:04:16.069669   99116 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-266826] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1226 22:04:16.069736   99116 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1226 22:04:16.069747   99116 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1226 22:04:16.069899   99116 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-266826] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1226 22:04:16.069910   99116 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-266826] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1226 22:04:16.069989   99116 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1226 22:04:16.070000   99116 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1226 22:04:16.070068   99116 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1226 22:04:16.070079   99116 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1226 22:04:16.070136   99116 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1226 22:04:16.070145   99116 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1226 22:04:16.070215   99116 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 22:04:16.070225   99116 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 22:04:16.070286   99116 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 22:04:16.070296   99116 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 22:04:16.070354   99116 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 22:04:16.070365   99116 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 22:04:16.070441   99116 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 22:04:16.070452   99116 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 22:04:16.070521   99116 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 22:04:16.070533   99116 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 22:04:16.070598   99116 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 22:04:16.070608   99116 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 22:04:16.070682   99116 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 22:04:16.072152   99116 out.go:204]   - Booting up control plane ...
	I1226 22:04:16.070775   99116 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 22:04:16.072276   99116 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 22:04:16.072290   99116 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 22:04:16.072392   99116 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 22:04:16.072412   99116 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 22:04:16.072521   99116 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 22:04:16.072554   99116 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 22:04:16.072700   99116 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 22:04:16.072716   99116 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 22:04:16.072782   99116 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 22:04:16.072789   99116 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 22:04:16.072825   99116 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1226 22:04:16.072831   99116 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1226 22:04:16.072990   99116 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1226 22:04:16.073003   99116 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1226 22:04:16.073096   99116 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002151 seconds
	I1226 22:04:16.073106   99116 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.002151 seconds
	I1226 22:04:16.073242   99116 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1226 22:04:16.073254   99116 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1226 22:04:16.073419   99116 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1226 22:04:16.073437   99116 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1226 22:04:16.073498   99116 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1226 22:04:16.073509   99116 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1226 22:04:16.073714   99116 kubeadm.go:322] [mark-control-plane] Marking the node multinode-266826 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1226 22:04:16.073734   99116 command_runner.go:130] > [mark-control-plane] Marking the node multinode-266826 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1226 22:04:16.073805   99116 kubeadm.go:322] [bootstrap-token] Using token: 12ubwl.5ddyabp330j7tk8u
	I1226 22:04:16.075217   99116 out.go:204]   - Configuring RBAC rules ...
	I1226 22:04:16.073922   99116 command_runner.go:130] > [bootstrap-token] Using token: 12ubwl.5ddyabp330j7tk8u
	I1226 22:04:16.075310   99116 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1226 22:04:16.075318   99116 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1226 22:04:16.075399   99116 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1226 22:04:16.075415   99116 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1226 22:04:16.075536   99116 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1226 22:04:16.075543   99116 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1226 22:04:16.075646   99116 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1226 22:04:16.075653   99116 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1226 22:04:16.075751   99116 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1226 22:04:16.075759   99116 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1226 22:04:16.075835   99116 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1226 22:04:16.075843   99116 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1226 22:04:16.075937   99116 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1226 22:04:16.075944   99116 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1226 22:04:16.075978   99116 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1226 22:04:16.075984   99116 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1226 22:04:16.076055   99116 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1226 22:04:16.076084   99116 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1226 22:04:16.076099   99116 kubeadm.go:322] 
	I1226 22:04:16.076179   99116 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1226 22:04:16.076188   99116 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1226 22:04:16.076191   99116 kubeadm.go:322] 
	I1226 22:04:16.076258   99116 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1226 22:04:16.076264   99116 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1226 22:04:16.076267   99116 kubeadm.go:322] 
	I1226 22:04:16.076286   99116 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1226 22:04:16.076293   99116 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1226 22:04:16.076342   99116 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1226 22:04:16.076349   99116 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1226 22:04:16.076386   99116 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1226 22:04:16.076392   99116 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1226 22:04:16.076398   99116 kubeadm.go:322] 
	I1226 22:04:16.076457   99116 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1226 22:04:16.076466   99116 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1226 22:04:16.076469   99116 kubeadm.go:322] 
	I1226 22:04:16.076509   99116 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1226 22:04:16.076515   99116 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1226 22:04:16.076519   99116 kubeadm.go:322] 
	I1226 22:04:16.076569   99116 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1226 22:04:16.076576   99116 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1226 22:04:16.076641   99116 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1226 22:04:16.076648   99116 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1226 22:04:16.076704   99116 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1226 22:04:16.076710   99116 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1226 22:04:16.076714   99116 kubeadm.go:322] 
	I1226 22:04:16.076793   99116 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1226 22:04:16.076804   99116 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1226 22:04:16.076880   99116 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1226 22:04:16.076886   99116 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1226 22:04:16.076890   99116 kubeadm.go:322] 
	I1226 22:04:16.076958   99116 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 12ubwl.5ddyabp330j7tk8u \
	I1226 22:04:16.076964   99116 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 12ubwl.5ddyabp330j7tk8u \
	I1226 22:04:16.077050   99116 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cbd3139c85275a56e0c84c386786206b386d7a2d9a6f7a7acac9428358424ddc \
	I1226 22:04:16.077056   99116 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:cbd3139c85275a56e0c84c386786206b386d7a2d9a6f7a7acac9428358424ddc \
	I1226 22:04:16.077073   99116 kubeadm.go:322] 	--control-plane 
	I1226 22:04:16.077079   99116 command_runner.go:130] > 	--control-plane 
	I1226 22:04:16.077082   99116 kubeadm.go:322] 
	I1226 22:04:16.077176   99116 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1226 22:04:16.077183   99116 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1226 22:04:16.077187   99116 kubeadm.go:322] 
	I1226 22:04:16.077253   99116 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 12ubwl.5ddyabp330j7tk8u \
	I1226 22:04:16.077259   99116 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 12ubwl.5ddyabp330j7tk8u \
	I1226 22:04:16.077343   99116 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cbd3139c85275a56e0c84c386786206b386d7a2d9a6f7a7acac9428358424ddc 
	I1226 22:04:16.077349   99116 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:cbd3139c85275a56e0c84c386786206b386d7a2d9a6f7a7acac9428358424ddc 
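The --discovery-token-ca-cert-hash printed in the join commands is the SHA-256 of the cluster CA's public key. If the value is lost it can be recomputed on the control plane; a sketch using the certificatesDir from the ClusterConfiguration above (a stock kubeadm cluster would use /etc/kubernetes/pki/ca.crt instead):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'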
	I1226 22:04:16.077374   99116 cni.go:84] Creating CNI manager for ""
	I1226 22:04:16.077382   99116 cni.go:136] 1 nodes found, recommending kindnet
	I1226 22:04:16.078906   99116 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1226 22:04:16.080186   99116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 22:04:16.083728   99116 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1226 22:04:16.083745   99116 command_runner.go:130] >   Size: 4085020   	Blocks: 7984       IO Block: 4096   regular file
	I1226 22:04:16.083751   99116 command_runner.go:130] > Device: 33h/51d	Inode: 573812      Links: 1
	I1226 22:04:16.083758   99116 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 22:04:16.083779   99116 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I1226 22:04:16.083794   99116 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I1226 22:04:16.083799   99116 command_runner.go:130] > Change: 2023-12-26 21:44:56.780245364 +0000
	I1226 22:04:16.083804   99116 command_runner.go:130] >  Birth: 2023-12-26 21:44:56.756242925 +0000
	I1226 22:04:16.083847   99116 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1226 22:04:16.083860   99116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 22:04:16.099798   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1226 22:04:16.717120   99116 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1226 22:04:16.723782   99116 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1226 22:04:16.729869   99116 command_runner.go:130] > serviceaccount/kindnet created
	I1226 22:04:16.738531   99116 command_runner.go:130] > daemonset.apps/kindnet created
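Once the kindnet manifest is applied, the DaemonSet rollout can be checked with the same kubectl binary and kubeconfig the log uses; a small sketch (the timeout value is arbitrary):

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset kindnet --timeout=120s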
	I1226 22:04:16.742590   99116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1226 22:04:16.742675   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:16.742727   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b minikube.k8s.io/name=multinode-266826 minikube.k8s.io/updated_at=2023_12_26T22_04_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:16.806625   99116 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1226 22:04:16.810828   99116 command_runner.go:130] > -16
	I1226 22:04:16.810873   99116 ops.go:34] apiserver oom_adj: -16
	I1226 22:04:16.810930   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:16.816511   99116 command_runner.go:130] > node/multinode-266826 labeled
	I1226 22:04:16.886235   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:17.311859   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:17.373848   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:17.811440   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:17.869131   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:18.311199   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:18.369686   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:18.811942   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:18.872360   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:19.311319   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:19.370788   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:19.810933   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:19.871801   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:20.311341   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:20.372309   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:20.811965   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:20.875587   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:21.311116   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:21.372397   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:21.811038   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:21.869384   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:22.311260   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:22.369836   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:22.811830   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:22.870407   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:23.311441   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:23.370759   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:23.811897   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:23.874700   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:24.311217   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:24.373065   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:24.811706   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:24.880403   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:25.311840   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:25.371074   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:25.810989   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:25.875915   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:26.311528   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:26.371602   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:26.811948   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:26.875765   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:27.311311   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:27.372922   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:27.811845   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:27.871008   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:28.311865   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:28.375955   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:28.811568   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:28.875632   99116 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:04:29.311862   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:29.376812   99116 command_runner.go:130] > NAME      SECRETS   AGE
	I1226 22:04:29.376832   99116 command_runner.go:130] > default   0         1s
	I1226 22:04:29.379240   99116 kubeadm.go:1088] duration metric: took 12.63663646s to wait for elevateKubeSystemPrivileges.
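The repeated 'serviceaccounts "default" not found' errors above are expected: the default ServiceAccount is created asynchronously by the controller manager shortly after the API server comes up, and minikube simply polls until it appears (about 12.6s here). An equivalent wait loop, sketched with the same binary and kubeconfig:

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
            --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 1   # retry until the controller manager has created the default ServiceAccount
    done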
	I1226 22:04:29.379276   99116 kubeadm.go:406] StartCluster complete in 22.731199031s
	I1226 22:04:29.379292   99116 settings.go:142] acquiring lock: {Name:mk12d34f71cd28d3e5987ed147ca378c18cddf69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:04:29.379355   99116 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 22:04:29.380000   99116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-7214/kubeconfig: {Name:mkba7ef3601947363f4aefe62b6956e6c044a4a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:04:29.380210   99116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1226 22:04:29.380236   99116 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1226 22:04:29.380334   99116 addons.go:69] Setting storage-provisioner=true in profile "multinode-266826"
	I1226 22:04:29.380377   99116 addons.go:237] Setting addon storage-provisioner=true in "multinode-266826"
	I1226 22:04:29.380448   99116 host.go:66] Checking if "multinode-266826" exists ...
	I1226 22:04:29.380374   99116 addons.go:69] Setting default-storageclass=true in profile "multinode-266826"
	I1226 22:04:29.380497   99116 config.go:182] Loaded profile config "multinode-266826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:04:29.380515   99116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-266826"
	I1226 22:04:29.380545   99116 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 22:04:29.380808   99116 cli_runner.go:164] Run: docker container inspect multinode-266826 --format={{.State.Status}}
	I1226 22:04:29.380930   99116 cli_runner.go:164] Run: docker container inspect multinode-266826 --format={{.State.Status}}
	I1226 22:04:29.380869   99116 kapi.go:59] client config for multinode-266826: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/client.key", CAFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:04:29.381569   99116 cert_rotation.go:137] Starting client certificate rotation controller
	I1226 22:04:29.381805   99116 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1226 22:04:29.381819   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:29.381826   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:29.381832   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:29.391391   99116 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1226 22:04:29.391418   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:29.391428   99116 round_trippers.go:580]     Content-Length: 291
	I1226 22:04:29.391436   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:29 GMT
	I1226 22:04:29.391445   99116 round_trippers.go:580]     Audit-Id: 86c41dbc-6123-4aa9-9c8b-874c154c66cf
	I1226 22:04:29.391450   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:29.391458   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:29.391463   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:29.391470   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:29.391502   99116 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7320c04a-855a-498b-b992-233853eb9cc8","resourceVersion":"258","creationTimestamp":"2023-12-26T22:04:15Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1226 22:04:29.391853   99116 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7320c04a-855a-498b-b992-233853eb9cc8","resourceVersion":"258","creationTimestamp":"2023-12-26T22:04:15Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1226 22:04:29.391897   99116 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1226 22:04:29.391905   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:29.391912   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:29.391918   99116 round_trippers.go:473]     Content-Type: application/json
	I1226 22:04:29.391926   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:29.396755   99116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:04:29.396776   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:29.396786   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:29.396801   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:29.396810   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:29.396818   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:29.396828   99116 round_trippers.go:580]     Content-Length: 291
	I1226 22:04:29.396836   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:29 GMT
	I1226 22:04:29.396845   99116 round_trippers.go:580]     Audit-Id: b563bae0-773f-4faa-94d8-0f58b9790dee
	I1226 22:04:29.396870   99116 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7320c04a-855a-498b-b992-233853eb9cc8","resourceVersion":"342","creationTimestamp":"2023-12-26T22:04:15Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1226 22:04:29.400512   99116 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 22:04:29.400703   99116 kapi.go:59] client config for multinode-266826: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/client.key", CAFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:04:29.400903   99116 addons.go:237] Setting addon default-storageclass=true in "multinode-266826"
	I1226 22:04:29.400934   99116 host.go:66] Checking if "multinode-266826" exists ...
	I1226 22:04:29.401293   99116 cli_runner.go:164] Run: docker container inspect multinode-266826 --format={{.State.Status}}
	I1226 22:04:29.404595   99116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 22:04:29.406032   99116 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 22:04:29.406054   99116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1226 22:04:29.406106   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826
	I1226 22:04:29.419319   99116 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1226 22:04:29.419342   99116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1226 22:04:29.419386   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826
	I1226 22:04:29.423648   99116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826/id_rsa Username:docker}
	I1226 22:04:29.438802   99116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826/id_rsa Username:docker}
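
The docker container inspect -f template a few lines up resolves which host port Docker mapped to the node container's 22/tcp, which is how sshutil ends up dialing 127.0.0.1:32847 to copy the addon manifests into /etc/kubernetes/addons. A small sketch of the same lookup, assuming only that the docker CLI is on PATH (the container name is the one from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same Go template the cli_runner lines pass to docker inspect.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "multinode-266826").Output()
	if err != nil {
		panic(err)
	}
	// Prints 32847 in this run; ssh then targets docker@127.0.0.1 on that
	// port using the profile's id_rsa key.
	fmt.Println(strings.TrimSpace(string(out)))
}
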
	I1226 22:04:29.460361   99116 command_runner.go:130] > apiVersion: v1
	I1226 22:04:29.460384   99116 command_runner.go:130] > data:
	I1226 22:04:29.460391   99116 command_runner.go:130] >   Corefile: |
	I1226 22:04:29.460396   99116 command_runner.go:130] >     .:53 {
	I1226 22:04:29.460402   99116 command_runner.go:130] >         errors
	I1226 22:04:29.460409   99116 command_runner.go:130] >         health {
	I1226 22:04:29.460417   99116 command_runner.go:130] >            lameduck 5s
	I1226 22:04:29.460425   99116 command_runner.go:130] >         }
	I1226 22:04:29.460432   99116 command_runner.go:130] >         ready
	I1226 22:04:29.460445   99116 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1226 22:04:29.460454   99116 command_runner.go:130] >            pods insecure
	I1226 22:04:29.460462   99116 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1226 22:04:29.460473   99116 command_runner.go:130] >            ttl 30
	I1226 22:04:29.460480   99116 command_runner.go:130] >         }
	I1226 22:04:29.460486   99116 command_runner.go:130] >         prometheus :9153
	I1226 22:04:29.460498   99116 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1226 22:04:29.460506   99116 command_runner.go:130] >            max_concurrent 1000
	I1226 22:04:29.460512   99116 command_runner.go:130] >         }
	I1226 22:04:29.460519   99116 command_runner.go:130] >         cache 30
	I1226 22:04:29.460527   99116 command_runner.go:130] >         loop
	I1226 22:04:29.460534   99116 command_runner.go:130] >         reload
	I1226 22:04:29.460544   99116 command_runner.go:130] >         loadbalance
	I1226 22:04:29.460550   99116 command_runner.go:130] >     }
	I1226 22:04:29.460559   99116 command_runner.go:130] > kind: ConfigMap
	I1226 22:04:29.460566   99116 command_runner.go:130] > metadata:
	I1226 22:04:29.460577   99116 command_runner.go:130] >   creationTimestamp: "2023-12-26T22:04:15Z"
	I1226 22:04:29.460586   99116 command_runner.go:130] >   name: coredns
	I1226 22:04:29.460593   99116 command_runner.go:130] >   namespace: kube-system
	I1226 22:04:29.460599   99116 command_runner.go:130] >   resourceVersion: "254"
	I1226 22:04:29.460610   99116 command_runner.go:130] >   uid: f7579c03-7a92-4f90-b017-bbcedf921bbf
	I1226 22:04:29.463753   99116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1226 22:04:29.575815   99116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 22:04:29.582125   99116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1226 22:04:29.882878   99116 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1226 22:04:29.882903   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:29.882921   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:29.882930   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:29.962121   99116 round_trippers.go:574] Response Status: 200 OK in 79 milliseconds
	I1226 22:04:29.962154   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:29.962164   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:29.962172   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:29.962181   99116 round_trippers.go:580]     Content-Length: 291
	I1226 22:04:29.962189   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:29 GMT
	I1226 22:04:29.962197   99116 round_trippers.go:580]     Audit-Id: d1c1c9b7-82b8-4c79-b5d4-2ae37cb3b1d7
	I1226 22:04:29.962204   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:29.962212   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:29.962251   99116 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7320c04a-855a-498b-b992-233853eb9cc8","resourceVersion":"348","creationTimestamp":"2023-12-26T22:04:15Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1226 22:04:29.962384   99116 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-266826" context rescaled to 1 replicas
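
The GET/PUT pair on .../deployments/coredns/scale above is the apps/v1 Scale subresource: minikube reads spec.replicas (2, the kubeadm default), rewrites it to 1, and writes the object back, which the bumped resourceVersion (258 to 342) and the kapi.go summary line confirm. A sketch of the same rescale through client-go, with a clientset built as in the earlier sketch:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS mirrors the logged GET/PUT: read the Scale subresource,
// drop spec.replicas from 2 to 1, and write it back.
func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 1 // one replica is enough while the cluster has a single node
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
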
	I1226 22:04:29.962424   99116 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 22:04:29.964108   99116 out.go:177] * Verifying Kubernetes components...
	I1226 22:04:29.965508   99116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:04:30.192748   99116 command_runner.go:130] > configmap/coredns replaced
	I1226 22:04:30.196945   99116 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
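
The ssh_runner sed pipeline earlier in this run splices a hosts block mapping host.minikube.internal to the gateway address 192.168.58.1 into the Corefile just ahead of the forward plugin, adds a log directive ahead of errors, and feeds the result to kubectl replace; "configmap/coredns replaced" confirms it landed. As a sketch, the same hosts insertion done directly with client-go instead of sed (the block content is exactly what the logged command injects):

package sketch

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectHostRecord inserts the same hosts block the sed command builds,
// immediately before the forward stanza of the Corefile.
func injectHostRecord(ctx context.Context, cs *kubernetes.Clientset) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := "        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
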
	I1226 22:04:30.398773   99116 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1226 22:04:30.404488   99116 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1226 22:04:30.410415   99116 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1226 22:04:30.416385   99116 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1226 22:04:30.422179   99116 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1226 22:04:30.430146   99116 command_runner.go:130] > pod/storage-provisioner created
	I1226 22:04:30.435249   99116 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1226 22:04:30.435379   99116 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1226 22:04:30.435391   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:30.435413   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:30.435424   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:30.435690   99116 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 22:04:30.436022   99116 kapi.go:59] client config for multinode-266826: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/client.key", CAFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:04:30.436333   99116 node_ready.go:35] waiting up to 6m0s for node "multinode-266826" to be "Ready" ...
	I1226 22:04:30.436434   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:04:30.436444   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:30.436456   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:30.436466   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:30.437449   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:30.437462   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:30.437469   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:30.437474   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:30.437480   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:30.437485   99116 round_trippers.go:580]     Content-Length: 1273
	I1226 22:04:30.437493   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:30 GMT
	I1226 22:04:30.437500   99116 round_trippers.go:580]     Audit-Id: 294764d4-bb11-4ae9-ad77-e87c83e42603
	I1226 22:04:30.437505   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:30.437526   99116 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"400"},"items":[{"metadata":{"name":"standard","uid":"0e2f1e14-f3e7-4c74-95ac-22d942441274","resourceVersion":"392","creationTimestamp":"2023-12-26T22:04:30Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-26T22:04:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1226 22:04:30.437902   99116 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"0e2f1e14-f3e7-4c74-95ac-22d942441274","resourceVersion":"392","creationTimestamp":"2023-12-26T22:04:30Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-26T22:04:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1226 22:04:30.437952   99116 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1226 22:04:30.437967   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:30.437977   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:30.437989   99116 round_trippers.go:473]     Content-Type: application/json
	I1226 22:04:30.438001   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:30.438137   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:04:30.438160   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:30.438170   99116 round_trippers.go:580]     Audit-Id: 421712d4-41db-4c2a-919a-e2f48159331d
	I1226 22:04:30.438179   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:30.438187   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:30.438199   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:30.438211   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:30.438223   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:30 GMT
	I1226 22:04:30.438414   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"349","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1226 22:04:30.440242   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:30.440266   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:30.440277   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:30.440289   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:30.440302   99116 round_trippers.go:580]     Content-Length: 1220
	I1226 22:04:30.440313   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:30 GMT
	I1226 22:04:30.440325   99116 round_trippers.go:580]     Audit-Id: 35ad443f-2e4e-44a7-8003-9f4bf89ab4d6
	I1226 22:04:30.440334   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:30.440343   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:30.440393   99116 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"0e2f1e14-f3e7-4c74-95ac-22d942441274","resourceVersion":"392","creationTimestamp":"2023-12-26T22:04:30Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-26T22:04:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1226 22:04:30.442391   99116 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1226 22:04:30.443680   99116 addons.go:508] enable addons completed in 1.063447262s: enabled=[storage-provisioner default-storageclass]
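
The GET and PUT against /apis/storage.k8s.io/v1/storageclasses above are the default-storageclass addon at work: it stamps the standard class with the well-known storageclass.kubernetes.io/is-default-class annotation so that PVCs with no explicit class bind to it. A client-go sketch of that update (same clientset assumption as before):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markDefault reproduces the logged PUT on storageclasses/standard.
func markDefault(ctx context.Context, cs *kubernetes.Clientset) error {
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	return err
}
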
	I1226 22:04:30.936532   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:04:30.936550   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:30.936558   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:30.936564   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:30.939003   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:30.939022   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:30.939029   99116 round_trippers.go:580]     Audit-Id: a446bc31-125e-4342-8e8f-d99e7e81a492
	I1226 22:04:30.939035   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:30.939040   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:30.939045   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:30.939051   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:30.939056   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:30 GMT
	I1226 22:04:30.939176   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"349","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1226 22:04:31.436692   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:04:31.436716   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:31.436724   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:31.436730   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:31.438917   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:31.438941   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:31.438947   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:31 GMT
	I1226 22:04:31.438952   99116 round_trippers.go:580]     Audit-Id: 90f0dfcd-0197-4e93-8176-1afb2d1819f3
	I1226 22:04:31.438959   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:31.438967   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:31.438984   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:31.438996   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:31.439129   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1226 22:04:31.439519   99116 node_ready.go:49] node "multinode-266826" has status "Ready":"True"
	I1226 22:04:31.439539   99116 node_ready.go:38] duration metric: took 1.00317263s waiting for node "multinode-266826" to be "Ready" ...
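
The repeated node GETs above, roughly every half second, are node_ready.go polling until the node's NodeReady condition reports True, within the 6m0s budget. A sketch using apimachinery's wait helper (the interval matches the cadence in the timestamps, not necessarily minikube's exact loop):

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node object until its NodeReady condition is True.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API hiccups as "not yet" and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
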
	I1226 22:04:31.439552   99116 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 22:04:31.439615   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1226 22:04:31.439623   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:31.439630   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:31.439636   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:31.442292   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:31.442307   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:31.442317   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:31 GMT
	I1226 22:04:31.442322   99116 round_trippers.go:580]     Audit-Id: b901dc2b-4484-40de-b503-1bc03e67ed47
	I1226 22:04:31.442328   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:31.442335   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:31.442343   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:31.442350   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:31.442875   99116 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4p457","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1082508c-2f95-46f0-8ec7-f530272863d8","resourceVersion":"417","creationTimestamp":"2023-12-26T22:04:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ced70fb3-5964-411a-9a71-77cadfafa3cd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ced70fb3-5964-411a-9a71-77cadfafa3cd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54150 chars]
	I1226 22:04:31.445739   99116 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4p457" in "kube-system" namespace to be "Ready" ...
	I1226 22:04:31.445800   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4p457
	I1226 22:04:31.445809   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:31.445816   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:31.445822   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:31.447507   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:04:31.447528   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:31.447539   99116 round_trippers.go:580]     Audit-Id: 2e7a46b6-649f-41ac-b499-6acd26432750
	I1226 22:04:31.447548   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:31.447557   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:31.447566   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:31.447577   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:31.447589   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:31 GMT
	I1226 22:04:31.447699   99116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4p457","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1082508c-2f95-46f0-8ec7-f530272863d8","resourceVersion":"417","creationTimestamp":"2023-12-26T22:04:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ced70fb3-5964-411a-9a71-77cadfafa3cd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ced70fb3-5964-411a-9a71-77cadfafa3cd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1226 22:04:31.448091   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:04:31.448103   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:31.448109   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:31.448115   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:31.449674   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:04:31.449693   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:31.449701   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:31.449713   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:31.449724   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:31.449734   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:31.449745   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:31 GMT
	I1226 22:04:31.449756   99116 round_trippers.go:580]     Audit-Id: 63894728-fe2b-4ab2-bf90-e322740dcce2
	I1226 22:04:31.449884   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1226 22:04:31.946640   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4p457
	I1226 22:04:31.946682   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:31.946694   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:31.946704   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:31.949001   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:31.949023   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:31.949039   99116 round_trippers.go:580]     Audit-Id: 642bad45-e33e-489b-b213-e168269a6243
	I1226 22:04:31.949047   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:31.949055   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:31.949063   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:31.949072   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:31.949081   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:31 GMT
	I1226 22:04:31.949192   99116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4p457","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1082508c-2f95-46f0-8ec7-f530272863d8","resourceVersion":"417","creationTimestamp":"2023-12-26T22:04:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ced70fb3-5964-411a-9a71-77cadfafa3cd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ced70fb3-5964-411a-9a71-77cadfafa3cd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1226 22:04:31.949610   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:04:31.949622   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:31.949629   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:31.949634   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:31.951481   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:04:31.951501   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:31.951508   99116 round_trippers.go:580]     Audit-Id: 66ccf32a-d088-4015-adf6-1ca077462868
	I1226 22:04:31.951513   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:31.951518   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:31.951530   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:31.951539   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:31.951544   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:31 GMT
	I1226 22:04:31.951720   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1226 22:04:32.446382   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4p457
	I1226 22:04:32.446405   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:32.446416   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:32.446422   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:32.448734   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:32.448752   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:32.448759   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:32.448765   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:32.448771   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:32 GMT
	I1226 22:04:32.448776   99116 round_trippers.go:580]     Audit-Id: 37b55431-aa99-470e-8601-3ec501a476bc
	I1226 22:04:32.448781   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:32.448786   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:32.448859   99116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4p457","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1082508c-2f95-46f0-8ec7-f530272863d8","resourceVersion":"427","creationTimestamp":"2023-12-26T22:04:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ced70fb3-5964-411a-9a71-77cadfafa3cd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ced70fb3-5964-411a-9a71-77cadfafa3cd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1226 22:04:32.449512   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:04:32.449538   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:32.449550   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:32.449563   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:32.451384   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:04:32.451401   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:32.451407   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:32 GMT
	I1226 22:04:32.451413   99116 round_trippers.go:580]     Audit-Id: 5df6deac-1c54-4bfb-bb65-f9fd003ccddc
	I1226 22:04:32.451419   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:32.451426   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:32.451433   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:32.451441   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:32.451617   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1226 22:04:32.451979   99116 pod_ready.go:92] pod "coredns-5dd5756b68-4p457" in "kube-system" namespace has status "Ready":"True"
	I1226 22:04:32.451999   99116 pod_ready.go:81] duration metric: took 1.006237831s waiting for pod "coredns-5dd5756b68-4p457" in "kube-system" namespace to be "Ready" ...
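
pod_ready.go repeats the same pattern per system pod: fetch the pod and treat it as Ready once its PodReady condition is True (each pod GET above is paired with a node GET, so the wait can also bail out if the node leaves Ready). The condition check, distilled as a sketch:

package sketch

import corev1 "k8s.io/api/core/v1"

// podIsReady is the readiness test behind pod_ready.go's "Ready" checks.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
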
	I1226 22:04:32.452010   99116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-266826" in "kube-system" namespace to be "Ready" ...
	I1226 22:04:32.452077   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-266826
	I1226 22:04:32.452088   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:32.452099   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:32.452110   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:32.453957   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:04:32.453978   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:32.453986   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:32 GMT
	I1226 22:04:32.453992   99116 round_trippers.go:580]     Audit-Id: 1520dbc5-c2de-4b16-8b55-532b00c90154
	I1226 22:04:32.453997   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:32.454002   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:32.454006   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:32.454012   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:32.454118   99116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-266826","namespace":"kube-system","uid":"292e9393-2f69-4242-a157-b140c190d193","resourceVersion":"328","creationTimestamp":"2023-12-26T22:04:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2fb45fcc87f242a4c992596764e6dc2d","kubernetes.io/config.mirror":"2fb45fcc87f242a4c992596764e6dc2d","kubernetes.io/config.seen":"2023-12-26T22:04:15.904879811Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1226 22:04:32.454488   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:04:32.454502   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:32.454509   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:32.454515   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:32.456125   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:04:32.456145   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:32.456154   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:32 GMT
	I1226 22:04:32.456163   99116 round_trippers.go:580]     Audit-Id: fb59f871-4a8b-4aca-827a-910026a23081
	I1226 22:04:32.456172   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:32.456185   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:32.456194   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:32.456203   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:32.456330   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1226 22:04:32.456595   99116 pod_ready.go:92] pod "etcd-multinode-266826" in "kube-system" namespace has status "Ready":"True"
	I1226 22:04:32.456614   99116 pod_ready.go:81] duration metric: took 4.596346ms waiting for pod "etcd-multinode-266826" in "kube-system" namespace to be "Ready" ...
	I1226 22:04:32.456629   99116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-266826" in "kube-system" namespace to be "Ready" ...
	I1226 22:04:32.456724   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-266826
	I1226 22:04:32.456738   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:32.456749   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:32.456762   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:32.458433   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:04:32.458457   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:32.458467   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:32.458480   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:32.458492   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:32 GMT
	I1226 22:04:32.458503   99116 round_trippers.go:580]     Audit-Id: b37a6fcc-66bd-417f-8b28-9302ad1ade35
	I1226 22:04:32.458514   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:32.458537   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:32.458637   99116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-266826","namespace":"kube-system","uid":"60235785-d057-4077-9fc2-eacc2fe9a891","resourceVersion":"308","creationTimestamp":"2023-12-26T22:04:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"921fd19a66a4b2c6fdcfeaed7f1b0d77","kubernetes.io/config.mirror":"921fd19a66a4b2c6fdcfeaed7f1b0d77","kubernetes.io/config.seen":"2023-12-26T22:04:15.904875073Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1226 22:04:32.459072   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:04:32.459087   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:32.459094   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:32.459100   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:32.460532   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:04:32.460546   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:32.460552   99116 round_trippers.go:580]     Audit-Id: 4f0aad6e-99ac-4391-a343-295582813230
	I1226 22:04:32.460558   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:32.460563   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:32.460568   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:32.460574   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:32.460582   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:32 GMT
	I1226 22:04:32.460712   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1226 22:04:32.460974   99116 pod_ready.go:92] pod "kube-apiserver-multinode-266826" in "kube-system" namespace has status "Ready":"True"
	I1226 22:04:32.460987   99116 pod_ready.go:81] duration metric: took 4.346856ms waiting for pod "kube-apiserver-multinode-266826" in "kube-system" namespace to be "Ready" ...
	I1226 22:04:32.460994   99116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-266826" in "kube-system" namespace to be "Ready" ...
	I1226 22:04:32.461042   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-266826
	I1226 22:04:32.461050   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:32.461056   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:32.461063   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:32.464312   99116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:04:32.464337   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:32.464348   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:32.464362   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:32.464375   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:32.464388   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:32 GMT
	I1226 22:04:32.464396   99116 round_trippers.go:580]     Audit-Id: 35f8bddc-30f4-4b90-b544-586f64381a46
	I1226 22:04:32.464402   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:32.464555   99116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-266826","namespace":"kube-system","uid":"fce43ca4-b581-4e63-9d91-407cfc3af34a","resourceVersion":"315","creationTimestamp":"2023-12-26T22:04:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b99dab39183e7b4071478b37393b1479","kubernetes.io/config.mirror":"b99dab39183e7b4071478b37393b1479","kubernetes.io/config.seen":"2023-12-26T22:04:15.904878245Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1226 22:04:32.465019   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:04:32.465036   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:32.465047   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:32.465058   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:32.466703   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:04:32.466719   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:32.466728   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:32 GMT
	I1226 22:04:32.466737   99116 round_trippers.go:580]     Audit-Id: 32f06804-6f90-4163-8551-e71dce76c87d
	I1226 22:04:32.466748   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:32.466757   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:32.466767   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:32.466779   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:32.466892   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1226 22:04:32.467163   99116 pod_ready.go:92] pod "kube-controller-manager-multinode-266826" in "kube-system" namespace has status "Ready":"True"
	I1226 22:04:32.467177   99116 pod_ready.go:81] duration metric: took 6.176678ms waiting for pod "kube-controller-manager-multinode-266826" in "kube-system" namespace to be "Ready" ...
	I1226 22:04:32.467188   99116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-frq75" in "kube-system" namespace to be "Ready" ...
	I1226 22:04:32.467231   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-frq75
	I1226 22:04:32.467241   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:32.467251   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:32.467261   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:32.468931   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:04:32.468945   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:32.468951   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:32.468956   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:32 GMT
	I1226 22:04:32.468963   99116 round_trippers.go:580]     Audit-Id: ba73f6f8-82e2-49c6-845a-f623a540ca76
	I1226 22:04:32.468971   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:32.468984   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:32.468992   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:32.469134   99116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-frq75","generateName":"kube-proxy-","namespace":"kube-system","uid":"e47e4ce1-94e6-4f54-8ce4-717af6ef6e4b","resourceVersion":"408","creationTimestamp":"2023-12-26T22:04:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5e3e2447-719f-4fc4-8238-6f824bc5e757","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e3e2447-719f-4fc4-8238-6f824bc5e757\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1226 22:04:32.636692   99116 request.go:629] Waited for 167.238796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:04:32.636743   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:04:32.636748   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:32.636756   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:32.636762   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:32.638864   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:32.638886   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:32.638896   99116 round_trippers.go:580]     Audit-Id: ecb8bbcf-df7b-4946-a5f3-c4509e858478
	I1226 22:04:32.638904   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:32.638913   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:32.638921   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:32.638936   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:32.638945   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:32 GMT
	I1226 22:04:32.639032   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1226 22:04:32.639345   99116 pod_ready.go:92] pod "kube-proxy-frq75" in "kube-system" namespace has status "Ready":"True"
	I1226 22:04:32.639364   99116 pod_ready.go:81] duration metric: took 172.169631ms waiting for pod "kube-proxy-frq75" in "kube-system" namespace to be "Ready" ...
	I1226 22:04:32.639373   99116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-266826" in "kube-system" namespace to be "Ready" ...
	I1226 22:04:32.836710   99116 request.go:629] Waited for 197.282618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-266826
	I1226 22:04:32.836767   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-266826
	I1226 22:04:32.836772   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:32.836794   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:32.836804   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:32.839017   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:32.839034   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:32.839048   99116 round_trippers.go:580]     Audit-Id: 40b1685d-b6b3-4f04-b646-fc4a20111de4
	I1226 22:04:32.839053   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:32.839062   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:32.839067   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:32.839072   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:32.839082   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:32 GMT
	I1226 22:04:32.839238   99116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-266826","namespace":"kube-system","uid":"af177936-2de6-4220-8dd7-76e070b19ea2","resourceVersion":"289","creationTimestamp":"2023-12-26T22:04:14Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3f9bfd187d50cafa9c506d2c393e2576","kubernetes.io/config.mirror":"3f9bfd187d50cafa9c506d2c393e2576","kubernetes.io/config.seen":"2023-12-26T22:04:09.928231295Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1226 22:04:33.036897   99116 request.go:629] Waited for 197.296903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:04:33.036953   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:04:33.036958   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:33.036965   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:33.036974   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:33.038964   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:04:33.038986   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:33.038997   99116 round_trippers.go:580]     Audit-Id: 5ffaeb82-1a32-44c0-8aab-a218452466c5
	I1226 22:04:33.039014   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:33.039026   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:33.039035   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:33.039046   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:33.039066   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:33 GMT
	I1226 22:04:33.039162   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1226 22:04:33.039442   99116 pod_ready.go:92] pod "kube-scheduler-multinode-266826" in "kube-system" namespace has status "Ready":"True"
	I1226 22:04:33.039456   99116 pod_ready.go:81] duration metric: took 400.078525ms waiting for pod "kube-scheduler-multinode-266826" in "kube-system" namespace to be "Ready" ...
	I1226 22:04:33.039466   99116 pod_ready.go:38] duration metric: took 1.599899157s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
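For reference, the readiness polling above is minikube querying the API server directly, one GET per control-plane pod plus a GET on the node. A rough hand-run equivalent, assuming the multinode-266826 context from this log is loaded in your kubeconfig, would be:

  kubectl --context multinode-266826 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
  kubectl --context multinode-266826 -n kube-system get pods -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'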
	I1226 22:04:33.039481   99116 api_server.go:52] waiting for apiserver process to appear ...
	I1226 22:04:33.039527   99116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 22:04:33.049041   99116 command_runner.go:130] > 1432
	I1226 22:04:33.049800   99116 api_server.go:72] duration metric: took 3.087347671s to wait for apiserver process to appear ...
	I1226 22:04:33.049817   99116 api_server.go:88] waiting for apiserver healthz status ...
	I1226 22:04:33.049833   99116 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1226 22:04:33.054518   99116 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1226 22:04:33.054589   99116 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1226 22:04:33.054600   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:33.054613   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:33.054625   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:33.055490   99116 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1226 22:04:33.055505   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:33.055511   99116 round_trippers.go:580]     Audit-Id: f708f712-93bd-44e4-b72d-69532776819a
	I1226 22:04:33.055519   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:33.055524   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:33.055531   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:33.055537   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:33.055544   99116 round_trippers.go:580]     Content-Length: 264
	I1226 22:04:33.055549   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:33 GMT
	I1226 22:04:33.055565   99116 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1226 22:04:33.055623   99116 api_server.go:141] control plane version: v1.28.4
	I1226 22:04:33.055636   99116 api_server.go:131] duration metric: took 5.814386ms to wait for apiserver health ...
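For reference, the healthz and version probes above can be reproduced against the same endpoint with kubectl's raw API access (a sketch, again assuming the multinode-266826 context from this log):

  kubectl --context multinode-266826 get --raw /healthz    # expected output: ok
  kubectl --context multinode-266826 get --raw /version    # expected to report gitVersion v1.28.4, matching the body above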
	I1226 22:04:33.055644   99116 system_pods.go:43] waiting for kube-system pods to appear ...
	I1226 22:04:33.237033   99116 request.go:629] Waited for 181.32233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1226 22:04:33.237086   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1226 22:04:33.237103   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:33.237122   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:33.237132   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:33.240411   99116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:04:33.240434   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:33.240441   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:33 GMT
	I1226 22:04:33.240447   99116 round_trippers.go:580]     Audit-Id: 9ed467ab-4f4c-4e41-8f7f-0f65392d0847
	I1226 22:04:33.240452   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:33.240457   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:33.240463   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:33.240471   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:33.240913   99116 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4p457","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1082508c-2f95-46f0-8ec7-f530272863d8","resourceVersion":"427","creationTimestamp":"2023-12-26T22:04:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ced70fb3-5964-411a-9a71-77cadfafa3cd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ced70fb3-5964-411a-9a71-77cadfafa3cd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1226 22:04:33.242574   99116 system_pods.go:59] 8 kube-system pods found
	I1226 22:04:33.242598   99116 system_pods.go:61] "coredns-5dd5756b68-4p457" [1082508c-2f95-46f0-8ec7-f530272863d8] Running
	I1226 22:04:33.242604   99116 system_pods.go:61] "etcd-multinode-266826" [292e9393-2f69-4242-a157-b140c190d193] Running
	I1226 22:04:33.242608   99116 system_pods.go:61] "kindnet-vfmsx" [e6020a6f-beb5-41f6-a52a-732e7052efa7] Running
	I1226 22:04:33.242616   99116 system_pods.go:61] "kube-apiserver-multinode-266826" [60235785-d057-4077-9fc2-eacc2fe9a891] Running
	I1226 22:04:33.242621   99116 system_pods.go:61] "kube-controller-manager-multinode-266826" [fce43ca4-b581-4e63-9d91-407cfc3af34a] Running
	I1226 22:04:33.242631   99116 system_pods.go:61] "kube-proxy-frq75" [e47e4ce1-94e6-4f54-8ce4-717af6ef6e4b] Running
	I1226 22:04:33.242635   99116 system_pods.go:61] "kube-scheduler-multinode-266826" [af177936-2de6-4220-8dd7-76e070b19ea2] Running
	I1226 22:04:33.242639   99116 system_pods.go:61] "storage-provisioner" [9b493a83-ad24-43ef-a212-44afe94ff921] Running
	I1226 22:04:33.242646   99116 system_pods.go:74] duration metric: took 186.994665ms to wait for pod list to return data ...
	I1226 22:04:33.242673   99116 default_sa.go:34] waiting for default service account to be created ...
	I1226 22:04:33.437100   99116 request.go:629] Waited for 194.327846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1226 22:04:33.437149   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1226 22:04:33.437154   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:33.437161   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:33.437167   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:33.439237   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:33.439255   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:33.439262   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:33.439267   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:33.439272   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:33.439278   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:33.439283   99116 round_trippers.go:580]     Content-Length: 261
	I1226 22:04:33.439288   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:33 GMT
	I1226 22:04:33.439293   99116 round_trippers.go:580]     Audit-Id: b522f4e3-edf8-4165-aed4-57160640f7b3
	I1226 22:04:33.439316   99116 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"334263d1-456a-4959-b878-240c54d55942","resourceVersion":"331","creationTimestamp":"2023-12-26T22:04:28Z"}}]}
	I1226 22:04:33.439503   99116 default_sa.go:45] found service account: "default"
	I1226 22:04:33.439520   99116 default_sa.go:55] duration metric: took 196.841185ms for default service account to be created ...
	I1226 22:04:33.439528   99116 system_pods.go:116] waiting for k8s-apps to be running ...
	I1226 22:04:33.636892   99116 request.go:629] Waited for 197.295339ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1226 22:04:33.636973   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1226 22:04:33.636991   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:33.637002   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:33.637015   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:33.640017   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:33.640041   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:33.640049   99116 round_trippers.go:580]     Audit-Id: 8db6a477-9174-4a08-a6e5-5fa17fac39fd
	I1226 22:04:33.640055   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:33.640068   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:33.640078   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:33.640087   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:33.640099   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:33 GMT
	I1226 22:04:33.640537   99116 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4p457","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1082508c-2f95-46f0-8ec7-f530272863d8","resourceVersion":"427","creationTimestamp":"2023-12-26T22:04:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ced70fb3-5964-411a-9a71-77cadfafa3cd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ced70fb3-5964-411a-9a71-77cadfafa3cd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1226 22:04:33.642679   99116 system_pods.go:86] 8 kube-system pods found
	I1226 22:04:33.642710   99116 system_pods.go:89] "coredns-5dd5756b68-4p457" [1082508c-2f95-46f0-8ec7-f530272863d8] Running
	I1226 22:04:33.642719   99116 system_pods.go:89] "etcd-multinode-266826" [292e9393-2f69-4242-a157-b140c190d193] Running
	I1226 22:04:33.642729   99116 system_pods.go:89] "kindnet-vfmsx" [e6020a6f-beb5-41f6-a52a-732e7052efa7] Running
	I1226 22:04:33.642743   99116 system_pods.go:89] "kube-apiserver-multinode-266826" [60235785-d057-4077-9fc2-eacc2fe9a891] Running
	I1226 22:04:33.642752   99116 system_pods.go:89] "kube-controller-manager-multinode-266826" [fce43ca4-b581-4e63-9d91-407cfc3af34a] Running
	I1226 22:04:33.642759   99116 system_pods.go:89] "kube-proxy-frq75" [e47e4ce1-94e6-4f54-8ce4-717af6ef6e4b] Running
	I1226 22:04:33.642767   99116 system_pods.go:89] "kube-scheduler-multinode-266826" [af177936-2de6-4220-8dd7-76e070b19ea2] Running
	I1226 22:04:33.642778   99116 system_pods.go:89] "storage-provisioner" [9b493a83-ad24-43ef-a212-44afe94ff921] Running
	I1226 22:04:33.642788   99116 system_pods.go:126] duration metric: took 203.254153ms to wait for k8s-apps to be running ...
	I1226 22:04:33.642800   99116 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 22:04:33.642852   99116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:04:33.653068   99116 system_svc.go:56] duration metric: took 10.263742ms WaitForService to wait for kubelet.
	I1226 22:04:33.653088   99116 kubeadm.go:581] duration metric: took 3.690637644s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 22:04:33.653109   99116 node_conditions.go:102] verifying NodePressure condition ...
	I1226 22:04:33.837489   99116 request.go:629] Waited for 184.319523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1226 22:04:33.837580   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1226 22:04:33.837592   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:33.837620   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:33.837637   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:33.839961   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:33.839980   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:33.839986   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:33.839992   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:33.839997   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:33 GMT
	I1226 22:04:33.840002   99116 round_trippers.go:580]     Audit-Id: a5172cf8-0660-4f65-b4f3-bb498f95d372
	I1226 22:04:33.840007   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:33.840011   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:33.840155   99116 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I1226 22:04:33.840499   99116 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1226 22:04:33.840537   99116 node_conditions.go:123] node cpu capacity is 8
	I1226 22:04:33.840550   99116 node_conditions.go:105] duration metric: took 187.435654ms to run NodePressure ...
	I1226 22:04:33.840572   99116 start.go:228] waiting for startup goroutines ...
	I1226 22:04:33.840584   99116 start.go:233] waiting for cluster config update ...
	I1226 22:04:33.840599   99116 start.go:242] writing updated cluster config ...
	I1226 22:04:33.842839   99116 out.go:177] 
	I1226 22:04:33.844639   99116 config.go:182] Loaded profile config "multinode-266826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:04:33.844716   99116 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/config.json ...
	I1226 22:04:33.846334   99116 out.go:177] * Starting worker node multinode-266826-m02 in cluster multinode-266826
	I1226 22:04:33.847947   99116 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 22:04:33.849317   99116 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 22:04:33.850509   99116 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 22:04:33.850533   99116 cache.go:56] Caching tarball of preloaded images
	I1226 22:04:33.850607   99116 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 22:04:33.850625   99116 preload.go:174] Found /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1226 22:04:33.850646   99116 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1226 22:04:33.850746   99116 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/config.json ...
	I1226 22:04:33.866251   99116 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 22:04:33.866270   99116 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I1226 22:04:33.866292   99116 cache.go:194] Successfully downloaded all kic artifacts
	I1226 22:04:33.866327   99116 start.go:365] acquiring machines lock for multinode-266826-m02: {Name:mkfe2964ede88322edc42ea79463b2007c6cc594 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:04:33.866431   99116 start.go:369] acquired machines lock for "multinode-266826-m02" in 82.025µs
	I1226 22:04:33.866456   99116 start.go:93] Provisioning new machine with config: &{Name:multinode-266826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-266826 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1226 22:04:33.866550   99116 start.go:125] createHost starting for "m02" (driver="docker")
	I1226 22:04:33.869403   99116 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1226 22:04:33.869516   99116 start.go:159] libmachine.API.Create for "multinode-266826" (driver="docker")
	I1226 22:04:33.869545   99116 client.go:168] LocalClient.Create starting
	I1226 22:04:33.869603   99116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem
	I1226 22:04:33.869638   99116 main.go:141] libmachine: Decoding PEM data...
	I1226 22:04:33.869661   99116 main.go:141] libmachine: Parsing certificate...
	I1226 22:04:33.869725   99116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem
	I1226 22:04:33.869753   99116 main.go:141] libmachine: Decoding PEM data...
	I1226 22:04:33.869772   99116 main.go:141] libmachine: Parsing certificate...
	I1226 22:04:33.869977   99116 cli_runner.go:164] Run: docker network inspect multinode-266826 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 22:04:33.884333   99116 network_create.go:77] Found existing network {name:multinode-266826 subnet:0xc002ce82d0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1226 22:04:33.884373   99116 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-266826-m02" container
	I1226 22:04:33.884431   99116 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 22:04:33.898341   99116 cli_runner.go:164] Run: docker volume create multinode-266826-m02 --label name.minikube.sigs.k8s.io=multinode-266826-m02 --label created_by.minikube.sigs.k8s.io=true
	I1226 22:04:33.913112   99116 oci.go:103] Successfully created a docker volume multinode-266826-m02
	I1226 22:04:33.913184   99116 cli_runner.go:164] Run: docker run --rm --name multinode-266826-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-266826-m02 --entrypoint /usr/bin/test -v multinode-266826-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 22:04:34.460476   99116 oci.go:107] Successfully prepared a docker volume multinode-266826-m02
	I1226 22:04:34.460522   99116 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 22:04:34.460542   99116 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 22:04:34.460598   99116 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-266826-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I1226 22:04:39.477678   99116 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-266826-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (5.017042142s)
	I1226 22:04:39.477711   99116 kic.go:203] duration metric: took 5.017165 seconds to extract preloaded images to volume
	W1226 22:04:39.477863   99116 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1226 22:04:39.477985   99116 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1226 22:04:39.529841   99116 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-266826-m02 --name multinode-266826-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-266826-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-266826-m02 --network multinode-266826 --ip 192.168.58.3 --volume multinode-266826-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I1226 22:04:39.815330   99116 cli_runner.go:164] Run: docker container inspect multinode-266826-m02 --format={{.State.Running}}
	I1226 22:04:39.834018   99116 cli_runner.go:164] Run: docker container inspect multinode-266826-m02 --format={{.State.Status}}
	I1226 22:04:39.851908   99116 cli_runner.go:164] Run: docker exec multinode-266826-m02 stat /var/lib/dpkg/alternatives/iptables
	I1226 22:04:39.922507   99116 oci.go:144] the created container "multinode-266826-m02" has a running status.
	I1226 22:04:39.922548   99116 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826-m02/id_rsa...
	I1226 22:04:40.392321   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1226 22:04:40.392369   99116 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1226 22:04:40.414003   99116 cli_runner.go:164] Run: docker container inspect multinode-266826-m02 --format={{.State.Status}}
	I1226 22:04:40.431164   99116 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1226 22:04:40.431183   99116 kic_runner.go:114] Args: [docker exec --privileged multinode-266826-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1226 22:04:40.488233   99116 cli_runner.go:164] Run: docker container inspect multinode-266826-m02 --format={{.State.Status}}
	I1226 22:04:40.504112   99116 machine.go:88] provisioning docker machine ...
	I1226 22:04:40.504161   99116 ubuntu.go:169] provisioning hostname "multinode-266826-m02"
	I1226 22:04:40.504225   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826-m02
	I1226 22:04:40.521367   99116 main.go:141] libmachine: Using SSH client type: native
	I1226 22:04:40.521687   99116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1226 22:04:40.521700   99116 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-266826-m02 && echo "multinode-266826-m02" | sudo tee /etc/hostname
	I1226 22:04:40.648345   99116 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-266826-m02
	
	I1226 22:04:40.648439   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826-m02
	I1226 22:04:40.664118   99116 main.go:141] libmachine: Using SSH client type: native
	I1226 22:04:40.664428   99116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1226 22:04:40.664446   99116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-266826-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-266826-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-266826-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 22:04:40.782524   99116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 22:04:40.782555   99116 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-7214/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-7214/.minikube}
	I1226 22:04:40.782572   99116 ubuntu.go:177] setting up certificates
	I1226 22:04:40.782582   99116 provision.go:83] configureAuth start
	I1226 22:04:40.782638   99116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-266826-m02
	I1226 22:04:40.798409   99116 provision.go:138] copyHostCerts
	I1226 22:04:40.798454   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem
	I1226 22:04:40.798491   99116 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem, removing ...
	I1226 22:04:40.798500   99116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem
	I1226 22:04:40.798579   99116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem (1082 bytes)
	I1226 22:04:40.798718   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem
	I1226 22:04:40.798745   99116 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem, removing ...
	I1226 22:04:40.798755   99116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem
	I1226 22:04:40.798798   99116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem (1123 bytes)
	I1226 22:04:40.798855   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem
	I1226 22:04:40.798878   99116 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem, removing ...
	I1226 22:04:40.798887   99116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem
	I1226 22:04:40.798922   99116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem (1679 bytes)
	I1226 22:04:40.798983   99116 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca-key.pem org=jenkins.multinode-266826-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-266826-m02]
	I1226 22:04:40.910242   99116 provision.go:172] copyRemoteCerts
	I1226 22:04:40.910298   99116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 22:04:40.910330   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826-m02
	I1226 22:04:40.926375   99116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826-m02/id_rsa Username:docker}
	I1226 22:04:41.018519   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1226 22:04:41.018583   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1226 22:04:41.038991   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1226 22:04:41.039070   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1226 22:04:41.058531   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1226 22:04:41.058580   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 22:04:41.077948   99116 provision.go:86] duration metric: configureAuth took 295.356414ms
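For reference, the server certificate generated during configureAuth above (org jenkins.multinode-266826-m02, SANs listed at provision.go:112) can be inspected by hand with openssl; the path below is the ServerCertPath from this log:

  openssl x509 -noout -text -in /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'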
	I1226 22:04:41.077975   99116 ubuntu.go:193] setting minikube options for container-runtime
	I1226 22:04:41.078137   99116 config.go:182] Loaded profile config "multinode-266826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:04:41.078222   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826-m02
	I1226 22:04:41.095007   99116 main.go:141] libmachine: Using SSH client type: native
	I1226 22:04:41.095329   99116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1226 22:04:41.095345   99116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "

	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 22:04:41.290865   99116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 22:04:41.290894   99116 machine.go:91] provisioned docker machine in 786.749061ms
	I1226 22:04:41.290906   99116 client.go:171] LocalClient.Create took 7.421352576s
	I1226 22:04:41.290929   99116 start.go:167] duration metric: libmachine.API.Create for "multinode-266826" took 7.421411589s
	I1226 22:04:41.290939   99116 start.go:300] post-start starting for "multinode-266826-m02" (driver="docker")
	I1226 22:04:41.290954   99116 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 22:04:41.291026   99116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 22:04:41.291075   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826-m02
	I1226 22:04:41.306409   99116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826-m02/id_rsa Username:docker}
	I1226 22:04:41.394960   99116 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 22:04:41.397689   99116 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1226 22:04:41.397706   99116 command_runner.go:130] > NAME="Ubuntu"
	I1226 22:04:41.397712   99116 command_runner.go:130] > VERSION_ID="22.04"
	I1226 22:04:41.397718   99116 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1226 22:04:41.397723   99116 command_runner.go:130] > VERSION_CODENAME=jammy
	I1226 22:04:41.397727   99116 command_runner.go:130] > ID=ubuntu
	I1226 22:04:41.397731   99116 command_runner.go:130] > ID_LIKE=debian
	I1226 22:04:41.397735   99116 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1226 22:04:41.397741   99116 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1226 22:04:41.397747   99116 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1226 22:04:41.397754   99116 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1226 22:04:41.397761   99116 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1226 22:04:41.397819   99116 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 22:04:41.397840   99116 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 22:04:41.397851   99116 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 22:04:41.397858   99116 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1226 22:04:41.397869   99116 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-7214/.minikube/addons for local assets ...
	I1226 22:04:41.397925   99116 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-7214/.minikube/files for local assets ...
	I1226 22:04:41.397986   99116 filesync.go:149] local asset: /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem -> 139762.pem in /etc/ssl/certs
	I1226 22:04:41.397994   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem -> /etc/ssl/certs/139762.pem
	I1226 22:04:41.398068   99116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 22:04:41.405461   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem --> /etc/ssl/certs/139762.pem (1708 bytes)
	I1226 22:04:41.426006   99116 start.go:303] post-start completed in 135.049069ms
	I1226 22:04:41.426312   99116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-266826-m02
	I1226 22:04:41.442047   99116 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/config.json ...
	I1226 22:04:41.442313   99116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:04:41.442366   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826-m02
	I1226 22:04:41.458260   99116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826-m02/id_rsa Username:docker}
	I1226 22:04:41.543001   99116 command_runner.go:130] > 21%
	I1226 22:04:41.543071   99116 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 22:04:41.546570   99116 command_runner.go:130] > 233G
	I1226 22:04:41.546778   99116 start.go:128] duration metric: createHost completed in 7.680215502s
	I1226 22:04:41.546795   99116 start.go:83] releasing machines lock for "multinode-266826-m02", held for 7.680352925s
	I1226 22:04:41.546860   99116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-266826-m02
	I1226 22:04:41.564899   99116 out.go:177] * Found network options:
	I1226 22:04:41.566342   99116 out.go:177]   - NO_PROXY=192.168.58.2
	W1226 22:04:41.567601   99116 proxy.go:119] fail to check proxy env: Error ip not in block
	W1226 22:04:41.567647   99116 proxy.go:119] fail to check proxy env: Error ip not in block
	I1226 22:04:41.567706   99116 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1226 22:04:41.567739   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826-m02
	I1226 22:04:41.567800   99116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 22:04:41.567855   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826-m02
	I1226 22:04:41.583392   99116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826-m02/id_rsa Username:docker}
	I1226 22:04:41.583642   99116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826-m02/id_rsa Username:docker}
	I1226 22:04:41.754592   99116 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1226 22:04:41.798924   99116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 22:04:41.802908   99116 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1226 22:04:41.802928   99116 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1226 22:04:41.802934   99116 command_runner.go:130] > Device: b0h/176d	Inode: 570038      Links: 1
	I1226 22:04:41.802940   99116 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 22:04:41.802946   99116 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1226 22:04:41.802953   99116 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1226 22:04:41.802958   99116 command_runner.go:130] > Change: 2023-12-26 21:44:56.364203087 +0000
	I1226 22:04:41.802963   99116 command_runner.go:130] >  Birth: 2023-12-26 21:44:56.364203087 +0000
	I1226 22:04:41.803172   99116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:04:41.819482   99116 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1226 22:04:41.819553   99116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:04:41.844660   99116 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1226 22:04:41.844723   99116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1226 22:04:41.844730   99116 start.go:475] detecting cgroup driver to use...
	I1226 22:04:41.844755   99116 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 22:04:41.844796   99116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 22:04:41.858350   99116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 22:04:41.867916   99116 docker.go:203] disabling cri-docker service (if available) ...
	I1226 22:04:41.867973   99116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1226 22:04:41.879119   99116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1226 22:04:41.891264   99116 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1226 22:04:41.971683   99116 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1226 22:04:41.984078   99116 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1226 22:04:42.054004   99116 docker.go:219] disabling docker service ...
	I1226 22:04:42.054070   99116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1226 22:04:42.070021   99116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1226 22:04:42.079525   99116 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1226 22:04:42.089584   99116 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1226 22:04:42.154332   99116 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1226 22:04:42.164205   99116 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1226 22:04:42.236153   99116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1226 22:04:42.245744   99116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 22:04:42.259368   99116 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1226 22:04:42.259406   99116 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1226 22:04:42.259457   99116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:04:42.267435   99116 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1226 22:04:42.267494   99116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:04:42.275764   99116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:04:42.283648   99116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:04:42.291859   99116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
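The three sed edits above pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup. A plausible reconstruction of the resulting /etc/crio/crio.conf.d/02-crio.conf drop-in (section placement assumed; the values themselves are confirmed by the `crio config` dump later in this log):

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"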
	I1226 22:04:42.299153   99116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 22:04:42.305797   99116 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1226 22:04:42.305837   99116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 22:04:42.312503   99116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 22:04:42.385229   99116 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1226 22:04:42.471628   99116 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1226 22:04:42.471696   99116 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1226 22:04:42.475110   99116 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1226 22:04:42.475131   99116 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1226 22:04:42.475141   99116 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I1226 22:04:42.475153   99116 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 22:04:42.475162   99116 command_runner.go:130] > Access: 2023-12-26 22:04:42.457061039 +0000
	I1226 22:04:42.475173   99116 command_runner.go:130] > Modify: 2023-12-26 22:04:42.457061039 +0000
	I1226 22:04:42.475179   99116 command_runner.go:130] > Change: 2023-12-26 22:04:42.457061039 +0000
	I1226 22:04:42.475183   99116 command_runner.go:130] >  Birth: -
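After restarting crio, the tool stats /var/run/crio/crio.sock until it appears, with a 60s budget. An illustrative polling loop for that wait (interval and messages are assumptions, not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	const sock = "/var/run/crio/crio.sock"
    	deadline := time.Now().Add(60 * time.Second)
    	for time.Now().Before(deadline) {
    		// Succeed once the path exists and is a unix socket.
    		if fi, err := os.Stat(sock); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			fmt.Println("socket ready:", sock)
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for", sock)
    }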
	I1226 22:04:42.475198   99116 start.go:543] Will wait 60s for crictl version
	I1226 22:04:42.475238   99116 ssh_runner.go:195] Run: which crictl
	I1226 22:04:42.477862   99116 command_runner.go:130] > /usr/bin/crictl
	I1226 22:04:42.477985   99116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 22:04:42.507750   99116 command_runner.go:130] > Version:  0.1.0
	I1226 22:04:42.507768   99116 command_runner.go:130] > RuntimeName:  cri-o
	I1226 22:04:42.507773   99116 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1226 22:04:42.507778   99116 command_runner.go:130] > RuntimeApiVersion:  v1
	I1226 22:04:42.507792   99116 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1226 22:04:42.507837   99116 ssh_runner.go:195] Run: crio --version
	I1226 22:04:42.537833   99116 command_runner.go:130] > crio version 1.24.6
	I1226 22:04:42.537855   99116 command_runner.go:130] > Version:          1.24.6
	I1226 22:04:42.537862   99116 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1226 22:04:42.537866   99116 command_runner.go:130] > GitTreeState:     clean
	I1226 22:04:42.537873   99116 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1226 22:04:42.537877   99116 command_runner.go:130] > GoVersion:        go1.18.2
	I1226 22:04:42.537886   99116 command_runner.go:130] > Compiler:         gc
	I1226 22:04:42.537891   99116 command_runner.go:130] > Platform:         linux/amd64
	I1226 22:04:42.537896   99116 command_runner.go:130] > Linkmode:         dynamic
	I1226 22:04:42.537903   99116 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1226 22:04:42.537911   99116 command_runner.go:130] > SeccompEnabled:   true
	I1226 22:04:42.537915   99116 command_runner.go:130] > AppArmorEnabled:  false
	I1226 22:04:42.539256   99116 ssh_runner.go:195] Run: crio --version
	I1226 22:04:42.570081   99116 command_runner.go:130] > crio version 1.24.6
	I1226 22:04:42.570103   99116 command_runner.go:130] > Version:          1.24.6
	I1226 22:04:42.570110   99116 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1226 22:04:42.570115   99116 command_runner.go:130] > GitTreeState:     clean
	I1226 22:04:42.570127   99116 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1226 22:04:42.570135   99116 command_runner.go:130] > GoVersion:        go1.18.2
	I1226 22:04:42.570145   99116 command_runner.go:130] > Compiler:         gc
	I1226 22:04:42.570152   99116 command_runner.go:130] > Platform:         linux/amd64
	I1226 22:04:42.570161   99116 command_runner.go:130] > Linkmode:         dynamic
	I1226 22:04:42.570172   99116 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1226 22:04:42.570177   99116 command_runner.go:130] > SeccompEnabled:   true
	I1226 22:04:42.570181   99116 command_runner.go:130] > AppArmorEnabled:  false
	I1226 22:04:42.573528   99116 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1226 22:04:42.574993   99116 out.go:177]   - env NO_PROXY=192.168.58.2
	I1226 22:04:42.576332   99116 cli_runner.go:164] Run: docker network inspect multinode-266826 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 22:04:42.591698   99116 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1226 22:04:42.595065   99116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
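The one-liner above updates /etc/hosts idempotently: grep -v strips any stale host.minikube.internal line, the fresh entry is appended, and the temp file is copied back into place. A hedged Go equivalent of that rewrite (paths and entry from the log; file mode is an assumption):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.58.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Println("read failed:", err)
    		return
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// Drop blank lines and any previous host.minikube.internal entry.
    		if line == "" || strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		fmt.Println("write failed:", err)
    	}
    }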
	I1226 22:04:42.604814   99116 certs.go:56] Setting up /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826 for IP: 192.168.58.3
	I1226 22:04:42.604848   99116 certs.go:190] acquiring lock for shared ca certs: {Name:mk3336638bd66053c32b2c7f6f2d1c6a563fd761 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:04:42.604965   99116 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.key
	I1226 22:04:42.605004   99116 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.key
	I1226 22:04:42.605013   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1226 22:04:42.605026   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1226 22:04:42.605043   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1226 22:04:42.605055   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1226 22:04:42.605100   99116 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/13976.pem (1338 bytes)
	W1226 22:04:42.605129   99116 certs.go:433] ignoring /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/13976_empty.pem, impossibly tiny 0 bytes
	I1226 22:04:42.605138   99116 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca-key.pem (1679 bytes)
	I1226 22:04:42.605160   99116 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem (1082 bytes)
	I1226 22:04:42.605183   99116 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem (1123 bytes)
	I1226 22:04:42.605203   99116 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem (1679 bytes)
	I1226 22:04:42.605239   99116 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem (1708 bytes)
	I1226 22:04:42.605267   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem -> /usr/share/ca-certificates/139762.pem
	I1226 22:04:42.605280   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:04:42.605292   99116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/13976.pem -> /usr/share/ca-certificates/13976.pem
	I1226 22:04:42.605580   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 22:04:42.625809   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 22:04:42.646199   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 22:04:42.666437   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1226 22:04:42.686942   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem --> /usr/share/ca-certificates/139762.pem (1708 bytes)
	I1226 22:04:42.707430   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 22:04:42.727006   99116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/certs/13976.pem --> /usr/share/ca-certificates/13976.pem (1338 bytes)
	I1226 22:04:42.747913   99116 ssh_runner.go:195] Run: openssl version
	I1226 22:04:42.752508   99116 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1226 22:04:42.752681   99116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 22:04:42.760554   99116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:04:42.763778   99116 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 26 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:04:42.763830   99116 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:04:42.763863   99116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:04:42.769908   99116 command_runner.go:130] > b5213941
	I1226 22:04:42.769980   99116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1226 22:04:42.778009   99116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13976.pem && ln -fs /usr/share/ca-certificates/13976.pem /etc/ssl/certs/13976.pem"
	I1226 22:04:42.785863   99116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13976.pem
	I1226 22:04:42.788814   99116 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 26 21:51 /usr/share/ca-certificates/13976.pem
	I1226 22:04:42.788871   99116 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 26 21:51 /usr/share/ca-certificates/13976.pem
	I1226 22:04:42.788913   99116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13976.pem
	I1226 22:04:42.794757   99116 command_runner.go:130] > 51391683
	I1226 22:04:42.795073   99116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13976.pem /etc/ssl/certs/51391683.0"
	I1226 22:04:42.802960   99116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139762.pem && ln -fs /usr/share/ca-certificates/139762.pem /etc/ssl/certs/139762.pem"
	I1226 22:04:42.810829   99116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139762.pem
	I1226 22:04:42.813697   99116 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 26 21:51 /usr/share/ca-certificates/139762.pem
	I1226 22:04:42.813748   99116 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 26 21:51 /usr/share/ca-certificates/139762.pem
	I1226 22:04:42.813787   99116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139762.pem
	I1226 22:04:42.819420   99116 command_runner.go:130] > 3ec20f2e
	I1226 22:04:42.819673   99116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139762.pem /etc/ssl/certs/3ec20f2e.0"
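Each `openssl x509 -hash -noout` call above prints the certificate's subject hash (b5213941, 51391683, 3ec20f2e), and the ln -fs calls create the <hash>.0 symlinks OpenSSL uses to look up CAs in /etc/ssl/certs. A minimal sketch of that hash-and-link step (shelling out to openssl; must run as root; illustrative only):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		fmt.Println("hash failed:", err)
    		return
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
    	link := "/etc/ssl/certs/" + hash + ".0"
    	os.Remove(link) // ignore error; the link may not exist yet
    	if err := os.Symlink(pem, link); err != nil {
    		fmt.Println("symlink failed:", err)
    	}
    }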
	I1226 22:04:42.827437   99116 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 22:04:42.830111   99116 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 22:04:42.830183   99116 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 22:04:42.830254   99116 ssh_runner.go:195] Run: crio config
	I1226 22:04:42.865805   99116 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1226 22:04:42.865836   99116 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1226 22:04:42.865847   99116 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1226 22:04:42.865853   99116 command_runner.go:130] > #
	I1226 22:04:42.865863   99116 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1226 22:04:42.865872   99116 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1226 22:04:42.865881   99116 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1226 22:04:42.865893   99116 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1226 22:04:42.865905   99116 command_runner.go:130] > # reload'.
	I1226 22:04:42.865915   99116 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1226 22:04:42.865929   99116 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1226 22:04:42.865942   99116 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1226 22:04:42.865956   99116 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1226 22:04:42.865965   99116 command_runner.go:130] > [crio]
	I1226 22:04:42.865975   99116 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1226 22:04:42.865986   99116 command_runner.go:130] > # container images, in this directory.
	I1226 22:04:42.865999   99116 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1226 22:04:42.866010   99116 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1226 22:04:42.866022   99116 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1226 22:04:42.866033   99116 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1226 22:04:42.866047   99116 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1226 22:04:42.866058   99116 command_runner.go:130] > # storage_driver = "vfs"
	I1226 22:04:42.866071   99116 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1226 22:04:42.866081   99116 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1226 22:04:42.866091   99116 command_runner.go:130] > # storage_option = [
	I1226 22:04:42.866097   99116 command_runner.go:130] > # ]
	I1226 22:04:42.866113   99116 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1226 22:04:42.866128   99116 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1226 22:04:42.866139   99116 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1226 22:04:42.866153   99116 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1226 22:04:42.866166   99116 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1226 22:04:42.866176   99116 command_runner.go:130] > # always happen on a node reboot
	I1226 22:04:42.866185   99116 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1226 22:04:42.866199   99116 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1226 22:04:42.866210   99116 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1226 22:04:42.866222   99116 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1226 22:04:42.866242   99116 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1226 22:04:42.866259   99116 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1226 22:04:42.866273   99116 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1226 22:04:42.866281   99116 command_runner.go:130] > # internal_wipe = true
	I1226 22:04:42.866294   99116 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1226 22:04:42.866303   99116 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1226 22:04:42.866311   99116 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1226 22:04:42.866327   99116 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1226 22:04:42.866337   99116 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1226 22:04:42.866375   99116 command_runner.go:130] > [crio.api]
	I1226 22:04:42.866385   99116 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1226 22:04:42.866396   99116 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1226 22:04:42.866407   99116 command_runner.go:130] > # IP address on which the stream server will listen.
	I1226 22:04:42.866414   99116 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1226 22:04:42.866428   99116 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1226 22:04:42.866440   99116 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1226 22:04:42.866449   99116 command_runner.go:130] > # stream_port = "0"
	I1226 22:04:42.866459   99116 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1226 22:04:42.866466   99116 command_runner.go:130] > # stream_enable_tls = false
	I1226 22:04:42.866476   99116 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1226 22:04:42.866488   99116 command_runner.go:130] > # stream_idle_timeout = ""
	I1226 22:04:42.866500   99116 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1226 22:04:42.866513   99116 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1226 22:04:42.866519   99116 command_runner.go:130] > # minutes.
	I1226 22:04:42.866524   99116 command_runner.go:130] > # stream_tls_cert = ""
	I1226 22:04:42.866539   99116 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1226 22:04:42.866552   99116 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1226 22:04:42.866563   99116 command_runner.go:130] > # stream_tls_key = ""
	I1226 22:04:42.866574   99116 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1226 22:04:42.866588   99116 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1226 22:04:42.866599   99116 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1226 22:04:42.866607   99116 command_runner.go:130] > # stream_tls_ca = ""
	I1226 22:04:42.866619   99116 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1226 22:04:42.866631   99116 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1226 22:04:42.866645   99116 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1226 22:04:42.866674   99116 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1226 22:04:42.866701   99116 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1226 22:04:42.866714   99116 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1226 22:04:42.866720   99116 command_runner.go:130] > [crio.runtime]
	I1226 22:04:42.866730   99116 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1226 22:04:42.866745   99116 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1226 22:04:42.866752   99116 command_runner.go:130] > # "nofile=1024:2048"
	I1226 22:04:42.866769   99116 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1226 22:04:42.866776   99116 command_runner.go:130] > # default_ulimits = [
	I1226 22:04:42.866782   99116 command_runner.go:130] > # ]
	I1226 22:04:42.866798   99116 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1226 22:04:42.866805   99116 command_runner.go:130] > # no_pivot = false
	I1226 22:04:42.866815   99116 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1226 22:04:42.866827   99116 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1226 22:04:42.866840   99116 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1226 22:04:42.866851   99116 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1226 22:04:42.866864   99116 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1226 22:04:42.866880   99116 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1226 22:04:42.866889   99116 command_runner.go:130] > # conmon = ""
	I1226 22:04:42.866901   99116 command_runner.go:130] > # Cgroup setting for conmon
	I1226 22:04:42.866922   99116 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1226 22:04:42.866932   99116 command_runner.go:130] > conmon_cgroup = "pod"
	I1226 22:04:42.866946   99116 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1226 22:04:42.866958   99116 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1226 22:04:42.866972   99116 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1226 22:04:42.866978   99116 command_runner.go:130] > # conmon_env = [
	I1226 22:04:42.866983   99116 command_runner.go:130] > # ]
	I1226 22:04:42.866993   99116 command_runner.go:130] > # Additional environment variables to set for all the
	I1226 22:04:42.867007   99116 command_runner.go:130] > # containers. These are overridden if set in the
	I1226 22:04:42.867017   99116 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1226 22:04:42.867033   99116 command_runner.go:130] > # default_env = [
	I1226 22:04:42.867037   99116 command_runner.go:130] > # ]
	I1226 22:04:42.867047   99116 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1226 22:04:42.867052   99116 command_runner.go:130] > # selinux = false
	I1226 22:04:42.867060   99116 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1226 22:04:42.867071   99116 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1226 22:04:42.867079   99116 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1226 22:04:42.867087   99116 command_runner.go:130] > # seccomp_profile = ""
	I1226 22:04:42.867095   99116 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1226 22:04:42.867105   99116 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1226 22:04:42.867113   99116 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1226 22:04:42.867121   99116 command_runner.go:130] > # which might increase security.
	I1226 22:04:42.867128   99116 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1226 22:04:42.867139   99116 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1226 22:04:42.867154   99116 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1226 22:04:42.867166   99116 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1226 22:04:42.867177   99116 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1226 22:04:42.867186   99116 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:04:42.867195   99116 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1226 22:04:42.867205   99116 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1226 22:04:42.867212   99116 command_runner.go:130] > # the cgroup blockio controller.
	I1226 22:04:42.867220   99116 command_runner.go:130] > # blockio_config_file = ""
	I1226 22:04:42.867231   99116 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1226 22:04:42.867240   99116 command_runner.go:130] > # irqbalance daemon.
	I1226 22:04:42.867251   99116 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1226 22:04:42.867265   99116 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1226 22:04:42.867276   99116 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:04:42.867286   99116 command_runner.go:130] > # rdt_config_file = ""
	I1226 22:04:42.867298   99116 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1226 22:04:42.867307   99116 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1226 22:04:42.867317   99116 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1226 22:04:42.867324   99116 command_runner.go:130] > # separate_pull_cgroup = ""
	I1226 22:04:42.867330   99116 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1226 22:04:42.867339   99116 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1226 22:04:42.867352   99116 command_runner.go:130] > # will be added.
	I1226 22:04:42.867357   99116 command_runner.go:130] > # default_capabilities = [
	I1226 22:04:42.867362   99116 command_runner.go:130] > # 	"CHOWN",
	I1226 22:04:42.867367   99116 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1226 22:04:42.867373   99116 command_runner.go:130] > # 	"FSETID",
	I1226 22:04:42.867377   99116 command_runner.go:130] > # 	"FOWNER",
	I1226 22:04:42.867380   99116 command_runner.go:130] > # 	"SETGID",
	I1226 22:04:42.867384   99116 command_runner.go:130] > # 	"SETUID",
	I1226 22:04:42.867390   99116 command_runner.go:130] > # 	"SETPCAP",
	I1226 22:04:42.867395   99116 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1226 22:04:42.867401   99116 command_runner.go:130] > # 	"KILL",
	I1226 22:04:42.867405   99116 command_runner.go:130] > # ]
	I1226 22:04:42.867414   99116 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1226 22:04:42.867424   99116 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1226 22:04:42.867430   99116 command_runner.go:130] > # add_inheritable_capabilities = true
	I1226 22:04:42.867437   99116 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1226 22:04:42.867445   99116 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1226 22:04:42.867451   99116 command_runner.go:130] > # default_sysctls = [
	I1226 22:04:42.867455   99116 command_runner.go:130] > # ]
	I1226 22:04:42.867461   99116 command_runner.go:130] > # List of devices on the host that a
	I1226 22:04:42.867467   99116 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1226 22:04:42.867474   99116 command_runner.go:130] > # allowed_devices = [
	I1226 22:04:42.867478   99116 command_runner.go:130] > # 	"/dev/fuse",
	I1226 22:04:42.867484   99116 command_runner.go:130] > # ]
	I1226 22:04:42.867490   99116 command_runner.go:130] > # List of additional devices, specified as
	I1226 22:04:42.867515   99116 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1226 22:04:42.867525   99116 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1226 22:04:42.867531   99116 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1226 22:04:42.867537   99116 command_runner.go:130] > # additional_devices = [
	I1226 22:04:42.867541   99116 command_runner.go:130] > # ]
	I1226 22:04:42.867549   99116 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1226 22:04:42.867553   99116 command_runner.go:130] > # cdi_spec_dirs = [
	I1226 22:04:42.867559   99116 command_runner.go:130] > # 	"/etc/cdi",
	I1226 22:04:42.867563   99116 command_runner.go:130] > # 	"/var/run/cdi",
	I1226 22:04:42.867569   99116 command_runner.go:130] > # ]
	I1226 22:04:42.867575   99116 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1226 22:04:42.867583   99116 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1226 22:04:42.867590   99116 command_runner.go:130] > # Defaults to false.
	I1226 22:04:42.867595   99116 command_runner.go:130] > # device_ownership_from_security_context = false
	I1226 22:04:42.867604   99116 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1226 22:04:42.867612   99116 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1226 22:04:42.867615   99116 command_runner.go:130] > # hooks_dir = [
	I1226 22:04:42.867622   99116 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1226 22:04:42.867626   99116 command_runner.go:130] > # ]
	I1226 22:04:42.867633   99116 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1226 22:04:42.867641   99116 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1226 22:04:42.867649   99116 command_runner.go:130] > # its default mounts from the following two files:
	I1226 22:04:42.867654   99116 command_runner.go:130] > #
	I1226 22:04:42.867661   99116 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1226 22:04:42.867669   99116 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1226 22:04:42.867678   99116 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1226 22:04:42.867683   99116 command_runner.go:130] > #
	I1226 22:04:42.867690   99116 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1226 22:04:42.867698   99116 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1226 22:04:42.867705   99116 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1226 22:04:42.867712   99116 command_runner.go:130] > #      only add mounts it finds in this file.
	I1226 22:04:42.867715   99116 command_runner.go:130] > #
	I1226 22:04:42.867722   99116 command_runner.go:130] > # default_mounts_file = ""
	I1226 22:04:42.867727   99116 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1226 22:04:42.867736   99116 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1226 22:04:42.867742   99116 command_runner.go:130] > # pids_limit = 0
	I1226 22:04:42.867749   99116 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1226 22:04:42.867757   99116 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1226 22:04:42.867765   99116 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1226 22:04:42.867775   99116 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1226 22:04:42.867782   99116 command_runner.go:130] > # log_size_max = -1
	I1226 22:04:42.867789   99116 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1226 22:04:42.867795   99116 command_runner.go:130] > # log_to_journald = false
	I1226 22:04:42.867802   99116 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1226 22:04:42.867809   99116 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1226 22:04:42.867817   99116 command_runner.go:130] > # Path to directory for container attach sockets.
	I1226 22:04:42.867822   99116 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1226 22:04:42.867829   99116 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1226 22:04:42.867836   99116 command_runner.go:130] > # bind_mount_prefix = ""
	I1226 22:04:42.867842   99116 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1226 22:04:42.867849   99116 command_runner.go:130] > # read_only = false
	I1226 22:04:42.867855   99116 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1226 22:04:42.867863   99116 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1226 22:04:42.867870   99116 command_runner.go:130] > # live configuration reload.
	I1226 22:04:42.867874   99116 command_runner.go:130] > # log_level = "info"
	I1226 22:04:42.867882   99116 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1226 22:04:42.867887   99116 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:04:42.867893   99116 command_runner.go:130] > # log_filter = ""
	I1226 22:04:42.867899   99116 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1226 22:04:42.867907   99116 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1226 22:04:42.867914   99116 command_runner.go:130] > # separated by comma.
	I1226 22:04:42.867918   99116 command_runner.go:130] > # uid_mappings = ""
	I1226 22:04:42.867930   99116 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1226 22:04:42.867943   99116 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1226 22:04:42.867953   99116 command_runner.go:130] > # separated by comma.
	I1226 22:04:42.867962   99116 command_runner.go:130] > # gid_mappings = ""
	I1226 22:04:42.867970   99116 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1226 22:04:42.867979   99116 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1226 22:04:42.867987   99116 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1226 22:04:42.867994   99116 command_runner.go:130] > # minimum_mappable_uid = -1
	I1226 22:04:42.868000   99116 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1226 22:04:42.868009   99116 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1226 22:04:42.868018   99116 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1226 22:04:42.868025   99116 command_runner.go:130] > # minimum_mappable_gid = -1
	I1226 22:04:42.868031   99116 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1226 22:04:42.868040   99116 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1226 22:04:42.868046   99116 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1226 22:04:42.868052   99116 command_runner.go:130] > # ctr_stop_timeout = 30
	I1226 22:04:42.868058   99116 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1226 22:04:42.868067   99116 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1226 22:04:42.868085   99116 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1226 22:04:42.868090   99116 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1226 22:04:42.868096   99116 command_runner.go:130] > # drop_infra_ctr = true
	I1226 22:04:42.868102   99116 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1226 22:04:42.868110   99116 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1226 22:04:42.868118   99116 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1226 22:04:42.868124   99116 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1226 22:04:42.868130   99116 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1226 22:04:42.868135   99116 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1226 22:04:42.868140   99116 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1226 22:04:42.868147   99116 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1226 22:04:42.868154   99116 command_runner.go:130] > # pinns_path = ""
	I1226 22:04:42.868161   99116 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1226 22:04:42.868169   99116 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1226 22:04:42.868178   99116 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1226 22:04:42.868184   99116 command_runner.go:130] > # default_runtime = "runc"
	I1226 22:04:42.868189   99116 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1226 22:04:42.868199   99116 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1226 22:04:42.868209   99116 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1226 22:04:42.868217   99116 command_runner.go:130] > # creation as a file is not desired either.
	I1226 22:04:42.868226   99116 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1226 22:04:42.868233   99116 command_runner.go:130] > # the hostname is being managed dynamically.
	I1226 22:04:42.868238   99116 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1226 22:04:42.868242   99116 command_runner.go:130] > # ]
	I1226 22:04:42.868248   99116 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1226 22:04:42.868256   99116 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1226 22:04:42.868263   99116 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1226 22:04:42.868272   99116 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1226 22:04:42.868275   99116 command_runner.go:130] > #
	I1226 22:04:42.868280   99116 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1226 22:04:42.868287   99116 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1226 22:04:42.868291   99116 command_runner.go:130] > #  runtime_type = "oci"
	I1226 22:04:42.868298   99116 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1226 22:04:42.868303   99116 command_runner.go:130] > #  privileged_without_host_devices = false
	I1226 22:04:42.868309   99116 command_runner.go:130] > #  allowed_annotations = []
	I1226 22:04:42.868313   99116 command_runner.go:130] > # Where:
	I1226 22:04:42.868321   99116 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1226 22:04:42.868327   99116 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1226 22:04:42.868336   99116 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1226 22:04:42.868348   99116 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1226 22:04:42.868354   99116 command_runner.go:130] > #   in $PATH.
	I1226 22:04:42.868360   99116 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1226 22:04:42.868367   99116 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1226 22:04:42.868374   99116 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1226 22:04:42.868380   99116 command_runner.go:130] > #   state.
	I1226 22:04:42.868386   99116 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1226 22:04:42.868394   99116 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1226 22:04:42.868400   99116 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1226 22:04:42.868408   99116 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1226 22:04:42.868416   99116 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1226 22:04:42.868423   99116 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1226 22:04:42.868430   99116 command_runner.go:130] > #   The currently recognized values are:
	I1226 22:04:42.868437   99116 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1226 22:04:42.868444   99116 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1226 22:04:42.868452   99116 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1226 22:04:42.868459   99116 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1226 22:04:42.868469   99116 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1226 22:04:42.868478   99116 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1226 22:04:42.868484   99116 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1226 22:04:42.868493   99116 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1226 22:04:42.868497   99116 command_runner.go:130] > #   should be moved to the container's cgroup
	I1226 22:04:42.868504   99116 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1226 22:04:42.868509   99116 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1226 22:04:42.868516   99116 command_runner.go:130] > runtime_type = "oci"
	I1226 22:04:42.868520   99116 command_runner.go:130] > runtime_root = "/run/runc"
	I1226 22:04:42.868525   99116 command_runner.go:130] > runtime_config_path = ""
	I1226 22:04:42.868529   99116 command_runner.go:130] > monitor_path = ""
	I1226 22:04:42.868534   99116 command_runner.go:130] > monitor_cgroup = ""
	I1226 22:04:42.868539   99116 command_runner.go:130] > monitor_exec_cgroup = ""
	I1226 22:04:42.868564   99116 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1226 22:04:42.868570   99116 command_runner.go:130] > # running containers
	I1226 22:04:42.868575   99116 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1226 22:04:42.868583   99116 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1226 22:04:42.868590   99116 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1226 22:04:42.868597   99116 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1226 22:04:42.868605   99116 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1226 22:04:42.868610   99116 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1226 22:04:42.868615   99116 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1226 22:04:42.868620   99116 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1226 22:04:42.868627   99116 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1226 22:04:42.868631   99116 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1226 22:04:42.868640   99116 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1226 22:04:42.868645   99116 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1226 22:04:42.868654   99116 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1226 22:04:42.868661   99116 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1226 22:04:42.868671   99116 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1226 22:04:42.868679   99116 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1226 22:04:42.868688   99116 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1226 22:04:42.868698   99116 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1226 22:04:42.868706   99116 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1226 22:04:42.868713   99116 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1226 22:04:42.868719   99116 command_runner.go:130] > # Example:
	I1226 22:04:42.868723   99116 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1226 22:04:42.868728   99116 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1226 22:04:42.868734   99116 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1226 22:04:42.868745   99116 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1226 22:04:42.868751   99116 command_runner.go:130] > # cpuset = "0-1"
	I1226 22:04:42.868756   99116 command_runner.go:130] > # cpushares = 0
	I1226 22:04:42.868762   99116 command_runner.go:130] > # Where:
	I1226 22:04:42.868772   99116 command_runner.go:130] > # The workload name is workload-type.
	I1226 22:04:42.868782   99116 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1226 22:04:42.868789   99116 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1226 22:04:42.868795   99116 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1226 22:04:42.868804   99116 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1226 22:04:42.868810   99116 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1226 22:04:42.868814   99116 command_runner.go:130] > # 
	I1226 22:04:42.868821   99116 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1226 22:04:42.868826   99116 command_runner.go:130] > #
	I1226 22:04:42.868832   99116 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1226 22:04:42.868840   99116 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1226 22:04:42.868847   99116 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1226 22:04:42.868855   99116 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1226 22:04:42.868861   99116 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1226 22:04:42.868866   99116 command_runner.go:130] > [crio.image]
	I1226 22:04:42.868872   99116 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1226 22:04:42.868879   99116 command_runner.go:130] > # default_transport = "docker://"
	I1226 22:04:42.868885   99116 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1226 22:04:42.868894   99116 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1226 22:04:42.868898   99116 command_runner.go:130] > # global_auth_file = ""
	I1226 22:04:42.868906   99116 command_runner.go:130] > # The image used to instantiate infra containers.
	I1226 22:04:42.868911   99116 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:04:42.868917   99116 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1226 22:04:42.868923   99116 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1226 22:04:42.868931   99116 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1226 22:04:42.868936   99116 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:04:42.868945   99116 command_runner.go:130] > # pause_image_auth_file = ""
	I1226 22:04:42.868950   99116 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1226 22:04:42.868959   99116 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1226 22:04:42.868965   99116 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1226 22:04:42.868972   99116 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1226 22:04:42.868977   99116 command_runner.go:130] > # pause_command = "/pause"
	I1226 22:04:42.868986   99116 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1226 22:04:42.868993   99116 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1226 22:04:42.869000   99116 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1226 22:04:42.869006   99116 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1226 22:04:42.869014   99116 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1226 22:04:42.869019   99116 command_runner.go:130] > # signature_policy = ""
	I1226 22:04:42.869027   99116 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1226 22:04:42.869036   99116 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1226 22:04:42.869040   99116 command_runner.go:130] > # changing them here.
	I1226 22:04:42.869044   99116 command_runner.go:130] > # insecure_registries = [
	I1226 22:04:42.869048   99116 command_runner.go:130] > # ]
	I1226 22:04:42.869054   99116 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1226 22:04:42.869061   99116 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1226 22:04:42.869065   99116 command_runner.go:130] > # image_volumes = "mkdir"
	I1226 22:04:42.869072   99116 command_runner.go:130] > # Temporary directory to use for storing big files
	I1226 22:04:42.869077   99116 command_runner.go:130] > # big_files_temporary_dir = ""
	I1226 22:04:42.869084   99116 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1226 22:04:42.869094   99116 command_runner.go:130] > # CNI plugins.
	I1226 22:04:42.869100   99116 command_runner.go:130] > [crio.network]
	I1226 22:04:42.869112   99116 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1226 22:04:42.869124   99116 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1226 22:04:42.869133   99116 command_runner.go:130] > # cni_default_network = ""
	I1226 22:04:42.869141   99116 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1226 22:04:42.869151   99116 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1226 22:04:42.869161   99116 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1226 22:04:42.869170   99116 command_runner.go:130] > # plugin_dirs = [
	I1226 22:04:42.869177   99116 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1226 22:04:42.869185   99116 command_runner.go:130] > # ]
	I1226 22:04:42.869195   99116 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1226 22:04:42.869203   99116 command_runner.go:130] > [crio.metrics]
	I1226 22:04:42.869209   99116 command_runner.go:130] > # Globally enable or disable metrics support.
	I1226 22:04:42.869216   99116 command_runner.go:130] > # enable_metrics = false
	I1226 22:04:42.869221   99116 command_runner.go:130] > # Specify enabled metrics collectors.
	I1226 22:04:42.869228   99116 command_runner.go:130] > # Per default all metrics are enabled.
	I1226 22:04:42.869234   99116 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1226 22:04:42.869242   99116 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1226 22:04:42.869248   99116 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1226 22:04:42.869255   99116 command_runner.go:130] > # metrics_collectors = [
	I1226 22:04:42.869259   99116 command_runner.go:130] > # 	"operations",
	I1226 22:04:42.869265   99116 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1226 22:04:42.869271   99116 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1226 22:04:42.869275   99116 command_runner.go:130] > # 	"operations_errors",
	I1226 22:04:42.869281   99116 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1226 22:04:42.869288   99116 command_runner.go:130] > # 	"image_pulls_by_name",
	I1226 22:04:42.869298   99116 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1226 22:04:42.869309   99116 command_runner.go:130] > # 	"image_pulls_failures",
	I1226 22:04:42.869317   99116 command_runner.go:130] > # 	"image_pulls_successes",
	I1226 22:04:42.869327   99116 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1226 22:04:42.869333   99116 command_runner.go:130] > # 	"image_layer_reuse",
	I1226 22:04:42.869337   99116 command_runner.go:130] > # 	"containers_oom_total",
	I1226 22:04:42.869348   99116 command_runner.go:130] > # 	"containers_oom",
	I1226 22:04:42.869354   99116 command_runner.go:130] > # 	"processes_defunct",
	I1226 22:04:42.869359   99116 command_runner.go:130] > # 	"operations_total",
	I1226 22:04:42.869365   99116 command_runner.go:130] > # 	"operations_latency_seconds",
	I1226 22:04:42.869370   99116 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1226 22:04:42.869377   99116 command_runner.go:130] > # 	"operations_errors_total",
	I1226 22:04:42.869382   99116 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1226 22:04:42.869386   99116 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1226 22:04:42.869393   99116 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1226 22:04:42.869398   99116 command_runner.go:130] > # 	"image_pulls_success_total",
	I1226 22:04:42.869404   99116 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1226 22:04:42.869408   99116 command_runner.go:130] > # 	"containers_oom_count_total",
	I1226 22:04:42.869414   99116 command_runner.go:130] > # ]
	I1226 22:04:42.869419   99116 command_runner.go:130] > # The port on which the metrics server will listen.
	I1226 22:04:42.869423   99116 command_runner.go:130] > # metrics_port = 9090
	I1226 22:04:42.869429   99116 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1226 22:04:42.869435   99116 command_runner.go:130] > # metrics_socket = ""
	I1226 22:04:42.869440   99116 command_runner.go:130] > # The certificate for the secure metrics server.
	I1226 22:04:42.869448   99116 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1226 22:04:42.869454   99116 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1226 22:04:42.869462   99116 command_runner.go:130] > # certificate on any modification event.
	I1226 22:04:42.869466   99116 command_runner.go:130] > # metrics_cert = ""
	I1226 22:04:42.869474   99116 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1226 22:04:42.869479   99116 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1226 22:04:42.869483   99116 command_runner.go:130] > # metrics_key = ""
	I1226 22:04:42.869491   99116 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1226 22:04:42.869495   99116 command_runner.go:130] > [crio.tracing]
	I1226 22:04:42.869503   99116 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1226 22:04:42.869508   99116 command_runner.go:130] > # enable_tracing = false
	I1226 22:04:42.869515   99116 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1226 22:04:42.869520   99116 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1226 22:04:42.869527   99116 command_runner.go:130] > # Number of samples to collect per million spans.
	I1226 22:04:42.869533   99116 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1226 22:04:42.869540   99116 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1226 22:04:42.869544   99116 command_runner.go:130] > [crio.stats]
	I1226 22:04:42.869550   99116 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1226 22:04:42.869558   99116 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1226 22:04:42.869562   99116 command_runner.go:130] > # stats_collection_period = 0
	I1226 22:04:42.869592   99116 command_runner.go:130] ! time="2023-12-26 22:04:42.863507948Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1226 22:04:42.869605   99116 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1226 22:04:42.869659   99116 cni.go:84] Creating CNI manager for ""
	I1226 22:04:42.869669   99116 cni.go:136] 2 nodes found, recommending kindnet
	I1226 22:04:42.869678   99116 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 22:04:42.869695   99116 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-266826 NodeName:multinode-266826-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1226 22:04:42.869800   99116 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-266826-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1226 22:04:42.869849   99116 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-266826-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-266826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
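Note: the config summary above ends the render step; the kubelet on the joined worker ultimately runs with the configuration kubeadm writes to /var/lib/kubelet/config.yaml (see the kubelet-start step later in this log). One way to inspect it by hand, a sketch assuming the profile and node names from this run:

	minikube -p multinode-266826 ssh -n m02 "sudo cat /var/lib/kubelet/config.yaml"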
	I1226 22:04:42.869893   99116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1226 22:04:42.877827   99116 command_runner.go:130] > kubeadm
	I1226 22:04:42.877847   99116 command_runner.go:130] > kubectl
	I1226 22:04:42.877859   99116 command_runner.go:130] > kubelet
	I1226 22:04:42.877887   99116 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 22:04:42.877934   99116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1226 22:04:42.885223   99116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1226 22:04:42.900346   99116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1226 22:04:42.916661   99116 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1226 22:04:42.919905   99116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 22:04:42.929535   99116 host.go:66] Checking if "multinode-266826" exists ...
	I1226 22:04:42.929753   99116 config.go:182] Loaded profile config "multinode-266826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:04:42.929800   99116 start.go:304] JoinCluster: &{Name:multinode-266826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-266826 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:04:42.929883   99116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1226 22:04:42.929923   99116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826
	I1226 22:04:42.945552   99116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826/id_rsa Username:docker}
	I1226 22:04:43.081521   99116 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token p1uts5.z3ejk76hjqkvr5t6 --discovery-token-ca-cert-hash sha256:cbd3139c85275a56e0c84c386786206b386d7a2d9a6f7a7acac9428358424ddc 
	I1226 22:04:43.081564   99116 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1226 22:04:43.081588   99116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p1uts5.z3ejk76hjqkvr5t6 --discovery-token-ca-cert-hash sha256:cbd3139c85275a56e0c84c386786206b386d7a2d9a6f7a7acac9428358424ddc --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-266826-m02"
	I1226 22:04:43.113807   99116 command_runner.go:130] ! W1226 22:04:43.113405    1109 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1226 22:04:43.140524   99116 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1226 22:04:43.202753   99116 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 22:04:45.827572   99116 command_runner.go:130] > [preflight] Running pre-flight checks
	I1226 22:04:45.827649   99116 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1226 22:04:45.827676   99116 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I1226 22:04:45.827689   99116 command_runner.go:130] > OS: Linux
	I1226 22:04:45.827699   99116 command_runner.go:130] > CGROUPS_CPU: enabled
	I1226 22:04:45.827710   99116 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1226 22:04:45.827721   99116 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1226 22:04:45.827730   99116 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1226 22:04:45.827745   99116 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1226 22:04:45.827760   99116 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1226 22:04:45.827779   99116 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1226 22:04:45.827791   99116 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1226 22:04:45.827800   99116 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1226 22:04:45.827815   99116 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1226 22:04:45.827831   99116 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1226 22:04:45.827845   99116 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 22:04:45.827861   99116 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 22:04:45.827869   99116 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1226 22:04:45.827896   99116 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1226 22:04:45.827920   99116 command_runner.go:130] > This node has joined the cluster:
	I1226 22:04:45.827944   99116 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1226 22:04:45.827961   99116 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1226 22:04:45.827973   99116 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1226 22:04:45.828001   99116 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p1uts5.z3ejk76hjqkvr5t6 --discovery-token-ca-cert-hash sha256:cbd3139c85275a56e0c84c386786206b386d7a2d9a6f7a7acac9428358424ddc --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-266826-m02": (2.746395163s)
	I1226 22:04:45.828036   99116 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1226 22:04:45.910487   99116 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1226 22:04:45.982740   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b minikube.k8s.io/name=multinode-266826 minikube.k8s.io/updated_at=2023_12_26T22_04_45_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:04:46.051262   99116 command_runner.go:130] > node/multinode-266826-m02 labeled
	I1226 22:04:46.053990   99116 start.go:306] JoinCluster complete in 3.124180112s
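Note: once the join completes, the new worker should appear (initially NotReady until a CNI pod is running on it) when listing nodes; a quick manual check, assuming the kubeconfig context carries the profile name as usual:

	kubectl --context multinode-266826 get nodes -o wide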
	I1226 22:04:46.054012   99116 cni.go:84] Creating CNI manager for ""
	I1226 22:04:46.054019   99116 cni.go:136] 2 nodes found, recommending kindnet
	I1226 22:04:46.054082   99116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 22:04:46.057403   99116 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1226 22:04:46.057431   99116 command_runner.go:130] >   Size: 4085020   	Blocks: 7984       IO Block: 4096   regular file
	I1226 22:04:46.057443   99116 command_runner.go:130] > Device: 33h/51d	Inode: 573812      Links: 1
	I1226 22:04:46.057456   99116 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 22:04:46.057469   99116 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I1226 22:04:46.057480   99116 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I1226 22:04:46.057489   99116 command_runner.go:130] > Change: 2023-12-26 21:44:56.780245364 +0000
	I1226 22:04:46.057496   99116 command_runner.go:130] >  Birth: 2023-12-26 21:44:56.756242925 +0000
	I1226 22:04:46.057547   99116 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1226 22:04:46.057557   99116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 22:04:46.073714   99116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1226 22:04:46.274406   99116 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1226 22:04:46.279520   99116 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1226 22:04:46.281720   99116 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1226 22:04:46.292263   99116 command_runner.go:130] > daemonset.apps/kindnet configured
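Note: the kindnet daemonset applied above is what eventually flips the new node to Ready; a sketch for watching its pods land on both nodes, assuming the manifest's default app=kindnet label:

	kubectl --context multinode-266826 -n kube-system get pods -l app=kindnet -o wide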
	I1226 22:04:46.296564   99116 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 22:04:46.296796   99116 kapi.go:59] client config for multinode-266826: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/client.key", CAFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:04:46.297216   99116 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1226 22:04:46.297237   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:46.297249   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:46.297266   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:46.299316   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:46.299332   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:46.299342   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:46.299348   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:46.299353   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:46.299358   99116 round_trippers.go:580]     Content-Length: 291
	I1226 22:04:46.299363   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:46 GMT
	I1226 22:04:46.299370   99116 round_trippers.go:580]     Audit-Id: 7c399d10-7c5b-4072-a818-6e7c40c50d10
	I1226 22:04:46.299375   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:46.299397   99116 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7320c04a-855a-498b-b992-233853eb9cc8","resourceVersion":"431","creationTimestamp":"2023-12-26T22:04:15Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1226 22:04:46.299473   99116 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-266826" context rescaled to 1 replicas
	I1226 22:04:46.299504   99116 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1226 22:04:46.302738   99116 out.go:177] * Verifying Kubernetes components...
	I1226 22:04:46.304310   99116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:04:46.315323   99116 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 22:04:46.315532   99116 kapi.go:59] client config for multinode-266826: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/multinode-266826/client.key", CAFile:"/home/jenkins/minikube-integration/17857-7214/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:04:46.315759   99116 node_ready.go:35] waiting up to 6m0s for node "multinode-266826-m02" to be "Ready" ...
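Note: the polling below is minikube's own readiness loop against the API server; an equivalent one-shot check from the host, sketched with the same 6m budget as the log, would be:

	kubectl --context multinode-266826 wait --for=condition=Ready node/multinode-266826-m02 --timeout=6m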
	I1226 22:04:46.315827   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:46.315835   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:46.315842   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:46.315849   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:46.318021   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:46.318039   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:46.318045   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:46 GMT
	I1226 22:04:46.318052   99116 round_trippers.go:580]     Audit-Id: 089032ae-048d-4db9-a286-aff5a5179684
	I1226 22:04:46.318057   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:46.318062   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:46.318067   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:46.318073   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:46.318178   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"472","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1226 22:04:46.816029   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:46.816049   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:46.816056   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:46.816062   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:46.818343   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:46.818365   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:46.818374   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:46 GMT
	I1226 22:04:46.818382   99116 round_trippers.go:580]     Audit-Id: fa36c573-c1ce-446b-81e9-e8532418c6fa
	I1226 22:04:46.818390   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:46.818398   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:46.818407   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:46.818418   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:46.818522   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"472","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1226 22:04:47.315998   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:47.316024   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:47.316038   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:47.316046   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:47.318343   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:47.318362   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:47.318368   99116 round_trippers.go:580]     Audit-Id: 16c781d2-7a1c-40b4-bc33-5ca6856d11e2
	I1226 22:04:47.318374   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:47.318379   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:47.318384   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:47.318389   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:47.318397   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:47 GMT
	I1226 22:04:47.318509   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"472","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1226 22:04:47.816036   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:47.816058   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:47.816065   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:47.816071   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:47.818243   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:47.818270   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:47.818280   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:47 GMT
	I1226 22:04:47.818289   99116 round_trippers.go:580]     Audit-Id: 8d842540-1d9e-4c02-85a5-acde465f8fd6
	I1226 22:04:47.818297   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:47.818303   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:47.818308   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:47.818316   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:47.818440   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"472","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1226 22:04:48.316908   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:48.316934   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:48.316946   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:48.316956   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:48.319556   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:48.319582   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:48.319593   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:48.319601   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:48.319609   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:48.319618   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:48 GMT
	I1226 22:04:48.319634   99116 round_trippers.go:580]     Audit-Id: 90b67e62-c098-4f12-8f2e-b4ddd20d2f19
	I1226 22:04:48.319643   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:48.319787   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"472","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1226 22:04:48.320190   99116 node_ready.go:58] node "multinode-266826-m02" has status "Ready":"False"
	I1226 22:04:48.815989   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:48.816009   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:48.816017   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:48.816023   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:48.818071   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:48.818096   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:48.818106   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:48.818115   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:48.818123   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:48.818131   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:48 GMT
	I1226 22:04:48.818140   99116 round_trippers.go:580]     Audit-Id: ba8747f5-18d6-41ed-89d7-accfbbec154c
	I1226 22:04:48.818150   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:48.818286   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"472","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1226 22:04:49.316881   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:49.316905   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:49.316913   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:49.316919   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:49.319337   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:49.319356   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:49.319363   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:49 GMT
	I1226 22:04:49.319368   99116 round_trippers.go:580]     Audit-Id: 3c1e118a-8852-4e09-b306-3b83aedff6c7
	I1226 22:04:49.319388   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:49.319394   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:49.319400   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:49.319414   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:49.319518   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"485","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1226 22:04:49.816859   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:49.816878   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:49.816885   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:49.816891   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:49.819015   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:49.819041   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:49.819052   99116 round_trippers.go:580]     Audit-Id: 00221c96-5188-4bcd-afac-9e26e67199cc
	I1226 22:04:49.819072   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:49.819084   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:49.819096   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:49.819107   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:49.819121   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:49 GMT
	I1226 22:04:49.819228   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"485","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1226 22:04:50.316863   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:50.316886   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:50.316895   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:50.316900   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:50.319124   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:50.319146   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:50.319156   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:50.319163   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:50 GMT
	I1226 22:04:50.319170   99116 round_trippers.go:580]     Audit-Id: bbcc4eac-9955-4888-8751-56b8fce82d07
	I1226 22:04:50.319178   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:50.319188   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:50.319197   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:50.319310   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"485","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1226 22:04:50.816841   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:50.816861   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:50.816869   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:50.816875   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:50.819099   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:50.819123   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:50.819134   99116 round_trippers.go:580]     Audit-Id: b4c38865-e57e-4a5c-b981-f1a27bcf129e
	I1226 22:04:50.819142   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:50.819150   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:50.819163   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:50.819174   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:50.819184   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:50 GMT
	I1226 22:04:50.819289   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"485","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1226 22:04:50.819717   99116 node_ready.go:58] node "multinode-266826-m02" has status "Ready":"False"
	I1226 22:04:51.316846   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:51.316865   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:51.316873   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:51.316879   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:51.319254   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:51.319278   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:51.319288   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:51.319297   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:51.319316   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:51 GMT
	I1226 22:04:51.319324   99116 round_trippers.go:580]     Audit-Id: 7634ea6e-9f5d-407a-8f59-c4fb583215d8
	I1226 22:04:51.319337   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:51.319346   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:51.319469   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"485","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1226 22:04:51.816207   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:51.816228   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:51.816235   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:51.816246   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:51.818547   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:51.818569   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:51.818579   99116 round_trippers.go:580]     Audit-Id: 834b6ba3-23bb-49f5-ab52-1fba0d742f81
	I1226 22:04:51.818588   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:51.818595   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:51.818602   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:51.818613   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:51.818622   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:51 GMT
	I1226 22:04:51.818767   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"485","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1226 22:04:52.316322   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:52.316346   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:52.316359   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:52.316365   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:52.318574   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:52.318596   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:52.318605   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:52.318613   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:52 GMT
	I1226 22:04:52.318620   99116 round_trippers.go:580]     Audit-Id: f9ba45cb-b3a0-469b-a10e-3910d34bec48
	I1226 22:04:52.318627   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:52.318638   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:52.318672   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:52.318812   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"485","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1226 22:04:52.816369   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:52.816390   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:52.816398   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:52.816404   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:52.818501   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:52.818525   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:52.818534   99116 round_trippers.go:580]     Audit-Id: 0f62344e-aac9-4b10-9531-f25133f8a39c
	I1226 22:04:52.818540   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:52.818545   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:52.818551   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:52.818556   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:52.818561   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:52 GMT
	I1226 22:04:52.818675   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"485","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1226 22:04:53.316277   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:53.316305   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:53.316318   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:53.316332   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:53.318642   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:53.318677   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:53.318689   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:53 GMT
	I1226 22:04:53.318702   99116 round_trippers.go:580]     Audit-Id: 9dfb7652-d60d-492b-abdf-0c8cf00e9901
	I1226 22:04:53.318711   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:53.318719   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:53.318725   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:53.318732   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:53.318912   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"485","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1226 22:04:53.319230   99116 node_ready.go:58] node "multinode-266826-m02" has status "Ready":"False"
	I1226 22:04:53.816573   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:53.816592   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:53.816600   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:53.816606   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:53.819133   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:53.819155   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:53.819162   99116 round_trippers.go:580]     Audit-Id: bcf6bed1-0a01-4a4e-a13a-e8636dd7a031
	I1226 22:04:53.819168   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:53.819174   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:53.819179   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:53.819184   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:53.819189   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:53 GMT
	I1226 22:04:53.819301   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"485","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1226 22:04:54.316912   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:54.316933   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:54.316941   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:54.316948   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:54.319280   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:54.319304   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:54.319311   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:54 GMT
	I1226 22:04:54.319317   99116 round_trippers.go:580]     Audit-Id: 236dbda9-469e-44a6-b63a-c25dfb0ff1b8
	I1226 22:04:54.319322   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:54.319328   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:54.319333   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:54.319338   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:54.319444   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"485","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1226 22:04:54.816040   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:54.816062   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:54.816070   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:54.816076   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:54.818415   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:54.818439   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:54.818446   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:54.818451   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:54.818456   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:54.818464   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:54.818470   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:54 GMT
	I1226 22:04:54.818478   99116 round_trippers.go:580]     Audit-Id: 84c3fe7c-58d6-46db-9986-ede39f482088
	I1226 22:04:54.818709   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"485","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1226 22:04:55.316203   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:55.316233   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:55.316241   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:55.316247   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:55.318406   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:55.318430   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:55.318440   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:55.318449   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:55.318456   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:55 GMT
	I1226 22:04:55.318464   99116 round_trippers.go:580]     Audit-Id: fcc4a03c-3bb4-4c9f-8a1e-0cf8a7cbe9ae
	I1226 22:04:55.318472   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:55.318482   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:55.318624   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"485","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I1226 22:04:55.816057   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:55.816080   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:55.816088   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:55.816097   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:55.818251   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:55.818273   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:55.818280   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:55.818286   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:55 GMT
	I1226 22:04:55.818293   99116 round_trippers.go:580]     Audit-Id: dbb1ff74-7826-4cad-8283-1799081656bb
	I1226 22:04:55.818301   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:55.818308   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:55.818320   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:55.818440   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:04:55.818822   99116 node_ready.go:58] node "multinode-266826-m02" has status "Ready":"False"
	I1226 22:04:56.316082   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:56.316109   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:56.316127   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:56.316134   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:56.318363   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:56.318379   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:56.318386   99116 round_trippers.go:580]     Audit-Id: 570553d9-ac96-438c-8828-fa8f25d070a2
	I1226 22:04:56.318392   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:56.318398   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:56.318405   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:56.318413   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:56.318423   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:56 GMT
	I1226 22:04:56.318566   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:04:56.816660   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:56.816687   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:56.816699   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:56.816706   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:56.818731   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:56.818751   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:56.818761   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:56 GMT
	I1226 22:04:56.818770   99116 round_trippers.go:580]     Audit-Id: 526ac9d9-2b23-490b-9950-85eb324d0ce9
	I1226 22:04:56.818778   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:56.818786   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:56.818799   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:56.818810   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:56.818934   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:04:57.316711   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:57.316731   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:57.316739   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:57.316745   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:57.318873   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:57.318893   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:57.318900   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:57.318906   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:57.318912   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:57.318917   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:57 GMT
	I1226 22:04:57.318923   99116 round_trippers.go:580]     Audit-Id: 7a205ebe-093c-482e-97e2-98bc73676a5e
	I1226 22:04:57.318928   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:57.319098   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:04:57.816764   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:57.816786   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:57.816803   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:57.816811   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:57.819071   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:57.819102   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:57.819113   99116 round_trippers.go:580]     Audit-Id: a25da846-ee0f-4d83-910c-0bf2686175d7
	I1226 22:04:57.819125   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:57.819134   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:57.819144   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:57.819158   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:57.819173   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:57 GMT
	I1226 22:04:57.819329   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:04:57.819667   99116 node_ready.go:58] node "multinode-266826-m02" has status "Ready":"False"
	I1226 22:04:58.316847   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:58.316866   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:58.316874   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:58.316880   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:58.319156   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:58.319177   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:58.319186   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:58 GMT
	I1226 22:04:58.319193   99116 round_trippers.go:580]     Audit-Id: adbeff77-0479-4230-aee4-b04acab6bf71
	I1226 22:04:58.319200   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:58.319208   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:58.319216   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:58.319224   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:58.319347   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:04:58.815901   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:58.815921   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:58.815929   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:58.815940   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:58.818236   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:58.818250   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:58.818257   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:58.818262   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:58 GMT
	I1226 22:04:58.818267   99116 round_trippers.go:580]     Audit-Id: dd83c062-2a12-4676-9cd8-66628db37873
	I1226 22:04:58.818273   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:58.818278   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:58.818285   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:58.818436   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:04:59.315976   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:59.315999   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:59.316007   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:59.316013   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:59.318451   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:59.318485   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:59.318495   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:59.318503   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:59 GMT
	I1226 22:04:59.318511   99116 round_trippers.go:580]     Audit-Id: 429e1f43-d912-4991-838a-5da7a7b6d08f
	I1226 22:04:59.318520   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:59.318532   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:59.318588   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:59.318731   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:04:59.816200   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:04:59.816232   99116 round_trippers.go:469] Request Headers:
	I1226 22:04:59.816240   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:04:59.816246   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:04:59.818453   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:04:59.818483   99116 round_trippers.go:577] Response Headers:
	I1226 22:04:59.818491   99116 round_trippers.go:580]     Audit-Id: 0f9bec5e-cb55-41a7-9df3-4bf3eb9700a9
	I1226 22:04:59.818501   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:04:59.818510   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:04:59.818517   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:04:59.818527   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:04:59.818536   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:04:59 GMT
	I1226 22:04:59.818700   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:00.316212   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:00.316232   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:00.316241   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:00.316247   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:00.318373   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:00.318395   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:00.318404   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:00.318413   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:00 GMT
	I1226 22:05:00.318422   99116 round_trippers.go:580]     Audit-Id: b295cae8-1733-4be1-9e77-146320529eed
	I1226 22:05:00.318431   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:00.318437   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:00.318442   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:00.318574   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:00.318901   99116 node_ready.go:58] node "multinode-266826-m02" has status "Ready":"False"
	I1226 22:05:00.816088   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:00.816109   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:00.816117   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:00.816124   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:00.818236   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:00.818259   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:00.818269   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:00.818279   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:00.818296   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:00 GMT
	I1226 22:05:00.818307   99116 round_trippers.go:580]     Audit-Id: 9b558e8d-24ad-45f5-bbcb-16749b7d6e65
	I1226 22:05:00.818313   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:00.818319   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:00.818463   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:01.316035   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:01.316059   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:01.316070   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:01.316078   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:01.318308   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:01.318327   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:01.318333   99116 round_trippers.go:580]     Audit-Id: 5a1f1c2a-5fd5-493a-a817-82bc44041c19
	I1226 22:05:01.318339   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:01.318347   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:01.318354   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:01.318362   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:01.318392   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:01 GMT
	I1226 22:05:01.318517   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:01.816002   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:01.816025   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:01.816034   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:01.816040   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:01.818450   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:01.818476   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:01.818487   99116 round_trippers.go:580]     Audit-Id: cef721bd-2be7-4a2c-b97b-886bf8f1132e
	I1226 22:05:01.818494   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:01.818500   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:01.818505   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:01.818511   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:01.818519   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:01 GMT
	I1226 22:05:01.818623   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:02.316097   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:02.316120   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:02.316128   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:02.316134   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:02.318306   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:02.318331   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:02.318343   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:02 GMT
	I1226 22:05:02.318350   99116 round_trippers.go:580]     Audit-Id: bfe1341d-9927-49a1-9c65-eba5198d2e92
	I1226 22:05:02.318355   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:02.318360   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:02.318368   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:02.318377   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:02.318504   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:02.815990   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:02.816015   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:02.816023   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:02.816029   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:02.818336   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:02.818383   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:02.818391   99116 round_trippers.go:580]     Audit-Id: 3d897b26-bbca-44ce-95a6-26226b9a1fb6
	I1226 22:05:02.818400   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:02.818411   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:02.818422   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:02.818433   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:02.818445   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:02 GMT
	I1226 22:05:02.818543   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:02.818844   99116 node_ready.go:58] node "multinode-266826-m02" has status "Ready":"False"
	I1226 22:05:03.315993   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:03.316013   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:03.316023   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:03.316031   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:03.318319   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:03.318343   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:03.318352   99116 round_trippers.go:580]     Audit-Id: a76cfad7-61f4-486c-81c8-04f607768eb8
	I1226 22:05:03.318360   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:03.318367   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:03.318374   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:03.318383   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:03.318396   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:03 GMT
	I1226 22:05:03.318542   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:03.816137   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:03.816159   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:03.816170   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:03.816181   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:03.818425   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:03.818443   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:03.818449   99116 round_trippers.go:580]     Audit-Id: f9dff14f-8f66-4f1e-99b3-a9174646c631
	I1226 22:05:03.818455   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:03.818460   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:03.818466   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:03.818471   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:03.818489   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:03 GMT
	I1226 22:05:03.818644   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:04.316202   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:04.316224   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:04.316231   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:04.316237   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:04.318468   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:04.318491   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:04.318497   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:04.318504   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:04.318509   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:04 GMT
	I1226 22:05:04.318514   99116 round_trippers.go:580]     Audit-Id: fe6bf87f-46e8-4abf-8281-221d626b55b8
	I1226 22:05:04.318519   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:04.318524   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:04.318635   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:04.816200   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:04.816234   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:04.816247   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:04.816257   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:04.818548   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:04.818567   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:04.818573   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:04 GMT
	I1226 22:05:04.818581   99116 round_trippers.go:580]     Audit-Id: 04f8046d-3e18-439d-9e5d-02308c3b81b2
	I1226 22:05:04.818586   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:04.818591   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:04.818596   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:04.818602   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:04.818802   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:04.819117   99116 node_ready.go:58] node "multinode-266826-m02" has status "Ready":"False"
	I1226 22:05:05.316300   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:05.316325   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:05.316337   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:05.316347   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:05.318504   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:05.318522   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:05.318529   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:05.318534   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:05.318539   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:05 GMT
	I1226 22:05:05.318544   99116 round_trippers.go:580]     Audit-Id: 276e61fa-b2e3-486c-90f2-82fc0592d2c4
	I1226 22:05:05.318549   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:05.318554   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:05.318700   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:05.815998   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:05.816017   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:05.816025   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:05.816031   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:05.818226   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:05.818244   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:05.818250   99116 round_trippers.go:580]     Audit-Id: 92d73672-e3a5-433e-aa46-f1ab6e71b0ab
	I1226 22:05:05.818256   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:05.818266   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:05.818276   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:05.818286   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:05.818295   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:05 GMT
	I1226 22:05:05.818410   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:06.316069   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:06.316092   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:06.316103   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:06.316112   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:06.318249   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:06.318272   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:06.318281   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:06.318289   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:06.318298   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:06.318315   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:06 GMT
	I1226 22:05:06.318322   99116 round_trippers.go:580]     Audit-Id: ecf9874d-6a25-4ef0-906b-2f991bb10858
	I1226 22:05:06.318327   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:06.318426   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:06.816500   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:06.816524   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:06.816544   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:06.816551   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:06.818761   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:06.818784   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:06.818795   99116 round_trippers.go:580]     Audit-Id: 9db6a872-b7e9-4e32-8fa4-358ed5620cf8
	I1226 22:05:06.818806   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:06.818819   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:06.818830   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:06.818841   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:06.818847   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:06 GMT
	I1226 22:05:06.818956   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:06.819263   99116 node_ready.go:58] node "multinode-266826-m02" has status "Ready":"False"
	I1226 22:05:07.316534   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:07.316555   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:07.316563   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:07.316569   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:07.318768   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:07.318788   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:07.318794   99116 round_trippers.go:580]     Audit-Id: 3fe83b34-fbbe-4205-8d41-2a6875847a56
	I1226 22:05:07.318803   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:07.318812   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:07.318820   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:07.318830   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:07.318841   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:07 GMT
	I1226 22:05:07.318944   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:07.816560   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:07.816583   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:07.816594   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:07.816603   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:07.819016   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:07.819039   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:07.819050   99116 round_trippers.go:580]     Audit-Id: cc944042-0f1c-4e47-b009-fee663962a27
	I1226 22:05:07.819072   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:07.819083   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:07.819090   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:07.819098   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:07.819104   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:07 GMT
	I1226 22:05:07.819225   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:08.316806   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:08.316826   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:08.316834   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:08.316840   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:08.318977   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:08.318999   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:08.319009   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:08.319018   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:08.319026   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:08.319034   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:08 GMT
	I1226 22:05:08.319043   99116 round_trippers.go:580]     Audit-Id: 2fa0213e-0490-4c2a-a277-e177e9aa7a1c
	I1226 22:05:08.319053   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:08.319170   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:08.816722   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:08.816747   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:08.816759   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:08.816769   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:08.819036   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:08.819057   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:08.819078   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:08.819086   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:08 GMT
	I1226 22:05:08.819093   99116 round_trippers.go:580]     Audit-Id: 54bc5375-c5ab-4a8c-b61b-c14faa371604
	I1226 22:05:08.819102   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:08.819126   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:08.819139   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:08.819243   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:08.819579   99116 node_ready.go:58] node "multinode-266826-m02" has status "Ready":"False"
	I1226 22:05:09.316880   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:09.316903   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:09.316911   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:09.316917   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:09.319290   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:09.319312   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:09.319322   99116 round_trippers.go:580]     Audit-Id: 9bc0ae93-c1eb-49dd-ac40-f6135e4d3685
	I1226 22:05:09.319330   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:09.319337   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:09.319346   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:09.319355   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:09.319365   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:09 GMT
	I1226 22:05:09.319480   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:09.816091   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:09.816109   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:09.816117   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:09.816123   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:09.818133   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:05:09.818152   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:09.818158   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:09.818164   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:09.818169   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:09.818174   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:09.818180   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:09 GMT
	I1226 22:05:09.818187   99116 round_trippers.go:580]     Audit-Id: 440a15fc-8b24-4e4e-9283-8da2cf498037
	I1226 22:05:09.818326   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:10.315896   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:10.315920   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:10.315930   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:10.315938   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:10.318306   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:10.318329   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:10.318338   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:10.318345   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:10 GMT
	I1226 22:05:10.318353   99116 round_trippers.go:580]     Audit-Id: 2e8de285-ceb7-4806-aa61-a33a83339a9f
	I1226 22:05:10.318360   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:10.318370   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:10.318382   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:10.318531   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:10.816001   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:10.816023   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:10.816030   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:10.816036   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:10.818280   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:10.818300   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:10.818309   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:10.818317   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:10.818325   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:10.818333   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:10.818341   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:10 GMT
	I1226 22:05:10.818353   99116 round_trippers.go:580]     Audit-Id: 20636ba1-2e41-4a2b-9e18-3d70ed5ef2d6
	I1226 22:05:10.818501   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:11.316053   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:11.316081   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:11.316091   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:11.316099   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:11.318449   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:11.318470   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:11.318476   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:11.318482   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:11.318487   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:11.318492   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:11 GMT
	I1226 22:05:11.318498   99116 round_trippers.go:580]     Audit-Id: e32750f5-0a56-4a5d-8805-750a57dabc1c
	I1226 22:05:11.318506   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:11.318649   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:11.319081   99116 node_ready.go:58] node "multinode-266826-m02" has status "Ready":"False"
	I1226 22:05:11.816342   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:11.816386   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:11.816396   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:11.816402   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:11.818562   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:11.818581   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:11.818590   99116 round_trippers.go:580]     Audit-Id: b91e2256-4aa7-439b-a570-9368f48e6aea
	I1226 22:05:11.818598   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:11.818605   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:11.818612   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:11.818621   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:11.818633   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:11 GMT
	I1226 22:05:11.818786   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:12.316349   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:12.316374   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:12.316382   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:12.316388   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:12.318633   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:12.318678   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:12.318689   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:12.318698   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:12.318711   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:12.318722   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:12 GMT
	I1226 22:05:12.318734   99116 round_trippers.go:580]     Audit-Id: b018e1c3-1bae-43a8-9cff-e4d5f21b78a7
	I1226 22:05:12.318743   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:12.318854   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:12.816404   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:12.816425   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:12.816433   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:12.816439   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:12.818693   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:12.818718   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:12.818728   99116 round_trippers.go:580]     Audit-Id: ce74675d-fc5f-4cc6-ad1e-06048b9467a8
	I1226 22:05:12.818736   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:12.818741   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:12.818749   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:12.818754   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:12.818759   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:12 GMT
	I1226 22:05:12.818885   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:13.316552   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:13.316579   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:13.316591   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:13.316600   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:13.318642   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:13.318676   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:13.318684   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:13.318689   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:13.318695   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:13.318700   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:13.318707   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:13 GMT
	I1226 22:05:13.318716   99116 round_trippers.go:580]     Audit-Id: 6569e45e-6526-403e-a6ae-f7c84d9fdf00
	I1226 22:05:13.318875   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:13.319172   99116 node_ready.go:58] node "multinode-266826-m02" has status "Ready":"False"
	I1226 22:05:13.816411   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:13.816431   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:13.816438   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:13.816444   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:13.818402   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:05:13.818436   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:13.818446   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:13.818456   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:13.818466   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:13.818472   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:13.818486   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:13 GMT
	I1226 22:05:13.818498   99116 round_trippers.go:580]     Audit-Id: baf1c07b-65de-4931-8bb3-3bb54744221a
	I1226 22:05:13.818596   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:14.316164   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:14.316191   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:14.316199   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:14.316205   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:14.318555   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:14.318576   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:14.318585   99116 round_trippers.go:580]     Audit-Id: 39a006fa-429f-4912-a578-0134ef6187f0
	I1226 22:05:14.318595   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:14.318603   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:14.318611   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:14.318618   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:14.318626   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:14 GMT
	I1226 22:05:14.318744   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:14.816024   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:14.816045   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:14.816053   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:14.816067   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:14.818122   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:14.818141   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:14.818147   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:14.818153   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:14.818158   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:14 GMT
	I1226 22:05:14.818163   99116 round_trippers.go:580]     Audit-Id: b7b1e8cd-f0e3-4b12-bc5c-5e7327c181d5
	I1226 22:05:14.818168   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:14.818173   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:14.818297   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:15.316949   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:15.316971   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:15.316979   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:15.316985   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:15.319217   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:15.319243   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:15.319252   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:15.319260   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:15.319268   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:15 GMT
	I1226 22:05:15.319276   99116 round_trippers.go:580]     Audit-Id: c94a27da-ba39-453b-ad5f-e0a7923d6fe7
	I1226 22:05:15.319283   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:15.319291   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:15.319396   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:15.319700   99116 node_ready.go:58] node "multinode-266826-m02" has status "Ready":"False"
	I1226 22:05:15.815932   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:15.815951   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:15.815959   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:15.815965   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:15.818135   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:15.818158   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:15.818167   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:15.818177   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:15 GMT
	I1226 22:05:15.818186   99116 round_trippers.go:580]     Audit-Id: 6984e1b7-efa4-4ab7-9275-fcb8711d49b6
	I1226 22:05:15.818194   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:15.818201   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:15.818206   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:15.818302   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:16.316898   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:16.316920   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:16.316927   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:16.316933   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:16.319536   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:16.319554   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:16.319565   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:16 GMT
	I1226 22:05:16.319571   99116 round_trippers.go:580]     Audit-Id: 6901748b-2c23-42c6-b9d2-0397d38bf383
	I1226 22:05:16.319576   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:16.319581   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:16.319586   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:16.319591   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:16.319691   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:16.816614   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:16.816632   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:16.816639   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:16.816645   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:16.818901   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:16.818935   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:16.818945   99116 round_trippers.go:580]     Audit-Id: 238ec0cf-31dd-43d9-aed3-d8993a1c88f4
	I1226 22:05:16.818954   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:16.818963   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:16.818979   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:16.818992   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:16.819004   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:16 GMT
	I1226 22:05:16.819133   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:17.316754   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:17.316775   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:17.316783   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:17.316789   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:17.318819   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:17.318842   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:17.318851   99116 round_trippers.go:580]     Audit-Id: 5adf6e2e-6060-4b5b-84c9-585b96f7fb3a
	I1226 22:05:17.318859   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:17.318870   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:17.318880   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:17.318892   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:17.318899   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:17 GMT
	I1226 22:05:17.319024   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:17.816632   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:17.816653   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:17.816661   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:17.816667   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:17.818765   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:17.818790   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:17.818800   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:17.818810   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:17 GMT
	I1226 22:05:17.818818   99116 round_trippers.go:580]     Audit-Id: 01a78e64-2f2c-4aea-bc53-8db3b33658c6
	I1226 22:05:17.818827   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:17.818834   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:17.818840   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:17.818958   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:17.819241   99116 node_ready.go:58] node "multinode-266826-m02" has status "Ready":"False"
	I1226 22:05:18.316610   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:18.316630   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:18.316638   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:18.316644   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:18.318825   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:18.318855   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:18.318866   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:18.318874   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:18.318882   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:18.318890   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:18.318901   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:18 GMT
	I1226 22:05:18.318910   99116 round_trippers.go:580]     Audit-Id: ee16151f-1e1b-4077-891f-3f8bbd135b2d
	I1226 22:05:18.319024   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:18.816680   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:18.816704   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:18.816712   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:18.816718   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:18.819003   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:18.819032   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:18.819044   99116 round_trippers.go:580]     Audit-Id: bdbce5eb-c566-4cc3-b257-001f4e124c33
	I1226 22:05:18.819054   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:18.819063   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:18.819077   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:18.819089   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:18.819103   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:18 GMT
	I1226 22:05:18.819229   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"497","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6031 chars]
	I1226 22:05:19.316793   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:19.316817   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:19.316825   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:19.316831   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:19.319300   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:19.319325   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:19.319336   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:19.319346   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:19 GMT
	I1226 22:05:19.319370   99116 round_trippers.go:580]     Audit-Id: 04f9a7f6-f72f-42ea-a003-9eeb61049767
	I1226 22:05:19.319379   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:19.319384   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:19.319391   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:19.319538   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"520","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5848 chars]
	I1226 22:05:19.319848   99116 node_ready.go:49] node "multinode-266826-m02" has status "Ready":"True"
	I1226 22:05:19.319865   99116 node_ready.go:38] duration metric: took 33.004092952s waiting for node "multinode-266826-m02" to be "Ready" ...
	I1226 22:05:19.319875   99116 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 22:05:19.319932   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1226 22:05:19.319939   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:19.319946   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:19.319954   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:19.322952   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:19.322979   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:19.322991   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:19 GMT
	I1226 22:05:19.322999   99116 round_trippers.go:580]     Audit-Id: 92e51f48-4da5-46d4-8f6c-27d5d4138746
	I1226 22:05:19.323007   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:19.323015   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:19.323025   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:19.323036   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:19.323541   99116 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"521"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4p457","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1082508c-2f95-46f0-8ec7-f530272863d8","resourceVersion":"427","creationTimestamp":"2023-12-26T22:04:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ced70fb3-5964-411a-9a71-77cadfafa3cd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ced70fb3-5964-411a-9a71-77cadfafa3cd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1226 22:05:19.325604   99116 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4p457" in "kube-system" namespace to be "Ready" ...
	I1226 22:05:19.325677   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4p457
	I1226 22:05:19.325686   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:19.325693   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:19.325700   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:19.327525   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:05:19.327540   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:19.327547   99116 round_trippers.go:580]     Audit-Id: 6b9937fd-812e-4221-b7fa-201060ba2ba9
	I1226 22:05:19.327552   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:19.327557   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:19.327562   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:19.327567   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:19.327573   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:19 GMT
	I1226 22:05:19.327671   99116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4p457","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1082508c-2f95-46f0-8ec7-f530272863d8","resourceVersion":"427","creationTimestamp":"2023-12-26T22:04:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ced70fb3-5964-411a-9a71-77cadfafa3cd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ced70fb3-5964-411a-9a71-77cadfafa3cd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1226 22:05:19.328058   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:05:19.328068   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:19.328085   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:19.328093   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:19.329745   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:05:19.329759   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:19.329767   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:19.329775   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:19.329783   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:19.329793   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:19.329804   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:19 GMT
	I1226 22:05:19.329813   99116 round_trippers.go:580]     Audit-Id: 21955c2f-d78e-4dfe-b0dd-b23b9cfb38af
	I1226 22:05:19.329940   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1226 22:05:19.330224   99116 pod_ready.go:92] pod "coredns-5dd5756b68-4p457" in "kube-system" namespace has status "Ready":"True"
	I1226 22:05:19.330238   99116 pod_ready.go:81] duration metric: took 4.612736ms waiting for pod "coredns-5dd5756b68-4p457" in "kube-system" namespace to be "Ready" ...
	I1226 22:05:19.330246   99116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-266826" in "kube-system" namespace to be "Ready" ...
	I1226 22:05:19.330287   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-266826
	I1226 22:05:19.330295   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:19.330301   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:19.330307   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:19.331898   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:05:19.331920   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:19.331930   99116 round_trippers.go:580]     Audit-Id: 5c6dd2ee-98fc-4630-b3a4-efa6607a4981
	I1226 22:05:19.331941   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:19.331954   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:19.331966   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:19.331980   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:19.331989   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:19 GMT
	I1226 22:05:19.332077   99116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-266826","namespace":"kube-system","uid":"292e9393-2f69-4242-a157-b140c190d193","resourceVersion":"328","creationTimestamp":"2023-12-26T22:04:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2fb45fcc87f242a4c992596764e6dc2d","kubernetes.io/config.mirror":"2fb45fcc87f242a4c992596764e6dc2d","kubernetes.io/config.seen":"2023-12-26T22:04:15.904879811Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1226 22:05:19.332395   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:05:19.332412   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:19.332419   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:19.332426   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:19.334053   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:05:19.334072   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:19.334082   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:19.334091   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:19.334098   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:19.334103   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:19.334108   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:19 GMT
	I1226 22:05:19.334113   99116 round_trippers.go:580]     Audit-Id: 8ddb032e-0b86-4db6-86e5-7c6ecb528a2d
	I1226 22:05:19.334199   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1226 22:05:19.334462   99116 pod_ready.go:92] pod "etcd-multinode-266826" in "kube-system" namespace has status "Ready":"True"
	I1226 22:05:19.334476   99116 pod_ready.go:81] duration metric: took 4.224458ms waiting for pod "etcd-multinode-266826" in "kube-system" namespace to be "Ready" ...
	I1226 22:05:19.334488   99116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-266826" in "kube-system" namespace to be "Ready" ...
	I1226 22:05:19.334528   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-266826
	I1226 22:05:19.334535   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:19.334550   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:19.334558   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:19.336074   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:05:19.336089   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:19.336095   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:19.336100   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:19 GMT
	I1226 22:05:19.336105   99116 round_trippers.go:580]     Audit-Id: 73fecc22-bf9a-4340-9c95-4a4ed9b15cfd
	I1226 22:05:19.336110   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:19.336115   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:19.336121   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:19.336247   99116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-266826","namespace":"kube-system","uid":"60235785-d057-4077-9fc2-eacc2fe9a891","resourceVersion":"308","creationTimestamp":"2023-12-26T22:04:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"921fd19a66a4b2c6fdcfeaed7f1b0d77","kubernetes.io/config.mirror":"921fd19a66a4b2c6fdcfeaed7f1b0d77","kubernetes.io/config.seen":"2023-12-26T22:04:15.904875073Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1226 22:05:19.336596   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:05:19.336607   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:19.336613   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:19.336619   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:19.338118   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:05:19.338137   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:19.338146   99116 round_trippers.go:580]     Audit-Id: 3d26f1d0-37cd-4746-8503-e2321c4d456d
	I1226 22:05:19.338154   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:19.338166   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:19.338174   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:19.338182   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:19.338191   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:19 GMT
	I1226 22:05:19.338303   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1226 22:05:19.338586   99116 pod_ready.go:92] pod "kube-apiserver-multinode-266826" in "kube-system" namespace has status "Ready":"True"
	I1226 22:05:19.338600   99116 pod_ready.go:81] duration metric: took 4.104213ms waiting for pod "kube-apiserver-multinode-266826" in "kube-system" namespace to be "Ready" ...
	I1226 22:05:19.338608   99116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-266826" in "kube-system" namespace to be "Ready" ...
	I1226 22:05:19.338647   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-266826
	I1226 22:05:19.338666   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:19.338676   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:19.338688   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:19.340211   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:05:19.340230   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:19.340240   99116 round_trippers.go:580]     Audit-Id: 421a55fd-86cf-4374-a285-e36a1d10dbc7
	I1226 22:05:19.340249   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:19.340257   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:19.340265   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:19.340276   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:19.340288   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:19 GMT
	I1226 22:05:19.340409   99116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-266826","namespace":"kube-system","uid":"fce43ca4-b581-4e63-9d91-407cfc3af34a","resourceVersion":"315","creationTimestamp":"2023-12-26T22:04:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b99dab39183e7b4071478b37393b1479","kubernetes.io/config.mirror":"b99dab39183e7b4071478b37393b1479","kubernetes.io/config.seen":"2023-12-26T22:04:15.904878245Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1226 22:05:19.340773   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:05:19.340786   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:19.340793   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:19.340800   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:19.342296   99116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:05:19.342313   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:19.342320   99116 round_trippers.go:580]     Audit-Id: a63d72b9-016d-484f-ba9d-f9f85a90e03b
	I1226 22:05:19.342328   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:19.342337   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:19.342346   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:19.342356   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:19.342365   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:19 GMT
	I1226 22:05:19.342489   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1226 22:05:19.342812   99116 pod_ready.go:92] pod "kube-controller-manager-multinode-266826" in "kube-system" namespace has status "Ready":"True"
	I1226 22:05:19.342827   99116 pod_ready.go:81] duration metric: took 4.213703ms waiting for pod "kube-controller-manager-multinode-266826" in "kube-system" namespace to be "Ready" ...
	I1226 22:05:19.342840   99116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7fj8c" in "kube-system" namespace to be "Ready" ...
	I1226 22:05:19.517897   99116 request.go:629] Waited for 174.971185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fj8c
	I1226 22:05:19.517967   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fj8c
	I1226 22:05:19.517978   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:19.517989   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:19.518004   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:19.520534   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:19.520559   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:19.520567   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:19.520572   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:19.520577   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:19.520583   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:19.520588   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:19 GMT
	I1226 22:05:19.520595   99116 round_trippers.go:580]     Audit-Id: 879d6005-b110-4cad-8995-2f400218d8b2
	I1226 22:05:19.520710   99116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7fj8c","generateName":"kube-proxy-","namespace":"kube-system","uid":"e5f60db1-5cb9-42ba-bff6-ec6f769e2939","resourceVersion":"489","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5e3e2447-719f-4fc4-8238-6f824bc5e757","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e3e2447-719f-4fc4-8238-6f824bc5e757\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1226 22:05:19.717495   99116 request.go:629] Waited for 196.391345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:19.717551   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826-m02
	I1226 22:05:19.717555   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:19.717562   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:19.717568   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:19.719823   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:19.719849   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:19.719860   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:19.719869   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:19.719879   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:19.719888   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:19.719897   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:19 GMT
	I1226 22:05:19.719914   99116 round_trippers.go:580]     Audit-Id: 7b6bf2af-201e-45f7-b2d8-3856e83ef86b
	I1226 22:05:19.720016   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826-m02","uid":"989df1f5-d01b-49c1-9c06-50c4a5bb280e","resourceVersion":"520","creationTimestamp":"2023-12-26T22:04:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_04_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5848 chars]
	I1226 22:05:19.720307   99116 pod_ready.go:92] pod "kube-proxy-7fj8c" in "kube-system" namespace has status "Ready":"True"
	I1226 22:05:19.720322   99116 pod_ready.go:81] duration metric: took 377.47066ms waiting for pod "kube-proxy-7fj8c" in "kube-system" namespace to be "Ready" ...
	I1226 22:05:19.720330   99116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-frq75" in "kube-system" namespace to be "Ready" ...
	I1226 22:05:19.917590   99116 request.go:629] Waited for 197.193985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-frq75
	I1226 22:05:19.917661   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-frq75
	I1226 22:05:19.917669   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:19.917677   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:19.917685   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:19.920001   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:19.920025   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:19.920033   99116 round_trippers.go:580]     Audit-Id: d6948f50-a9ba-44b3-b20c-a1faeb45b949
	I1226 22:05:19.920042   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:19.920051   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:19.920064   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:19.920074   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:19.920087   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:19 GMT
	I1226 22:05:19.920205   99116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-frq75","generateName":"kube-proxy-","namespace":"kube-system","uid":"e47e4ce1-94e6-4f54-8ce4-717af6ef6e4b","resourceVersion":"408","creationTimestamp":"2023-12-26T22:04:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5e3e2447-719f-4fc4-8238-6f824bc5e757","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e3e2447-719f-4fc4-8238-6f824bc5e757\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1226 22:05:20.116924   99116 request.go:629] Waited for 196.307454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:05:20.116997   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:05:20.117002   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:20.117009   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:20.117017   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:20.119304   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:20.119334   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:20.119344   99116 round_trippers.go:580]     Audit-Id: 66bd8e20-f32f-4394-8aa7-3c8715f64fac
	I1226 22:05:20.119353   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:20.119362   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:20.119371   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:20.119383   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:20.119421   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:20 GMT
	I1226 22:05:20.119568   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1226 22:05:20.119852   99116 pod_ready.go:92] pod "kube-proxy-frq75" in "kube-system" namespace has status "Ready":"True"
	I1226 22:05:20.119867   99116 pod_ready.go:81] duration metric: took 399.531137ms waiting for pod "kube-proxy-frq75" in "kube-system" namespace to be "Ready" ...
	I1226 22:05:20.119881   99116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-266826" in "kube-system" namespace to be "Ready" ...
	I1226 22:05:20.316814   99116 request.go:629] Waited for 196.858348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-266826
	I1226 22:05:20.316874   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-266826
	I1226 22:05:20.316879   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:20.316889   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:20.316901   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:20.319308   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:20.319330   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:20.319341   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:20 GMT
	I1226 22:05:20.319348   99116 round_trippers.go:580]     Audit-Id: b3b58260-405e-4a2e-a90c-c81c6989bef6
	I1226 22:05:20.319360   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:20.319371   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:20.319379   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:20.319390   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:20.319507   99116 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-266826","namespace":"kube-system","uid":"af177936-2de6-4220-8dd7-76e070b19ea2","resourceVersion":"289","creationTimestamp":"2023-12-26T22:04:14Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3f9bfd187d50cafa9c506d2c393e2576","kubernetes.io/config.mirror":"3f9bfd187d50cafa9c506d2c393e2576","kubernetes.io/config.seen":"2023-12-26T22:04:09.928231295Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:04:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1226 22:05:20.517269   99116 request.go:629] Waited for 197.392658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:05:20.517323   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-266826
	I1226 22:05:20.517328   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:20.517335   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:20.517348   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:20.519698   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:20.519717   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:20.519724   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:20.519730   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:20.519735   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:20.519740   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:20.519746   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:20 GMT
	I1226 22:05:20.519750   99116 round_trippers.go:580]     Audit-Id: b3a4a46c-fee9-4768-87fc-c714c2f336b8
	I1226 22:05:20.519864   99116 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:04:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1226 22:05:20.520163   99116 pod_ready.go:92] pod "kube-scheduler-multinode-266826" in "kube-system" namespace has status "Ready":"True"
	I1226 22:05:20.520180   99116 pod_ready.go:81] duration metric: took 400.292132ms waiting for pod "kube-scheduler-multinode-266826" in "kube-system" namespace to be "Ready" ...
	I1226 22:05:20.520192   99116 pod_ready.go:38] duration metric: took 1.200306738s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 22:05:20.520207   99116 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 22:05:20.520248   99116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:05:20.530819   99116 system_svc.go:56] duration metric: took 10.602408ms WaitForService to wait for kubelet.
	I1226 22:05:20.530845   99116 kubeadm.go:581] duration metric: took 34.231319044s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 22:05:20.530895   99116 node_conditions.go:102] verifying NodePressure condition ...
	I1226 22:05:20.717401   99116 request.go:629] Waited for 186.386814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1226 22:05:20.717451   99116 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1226 22:05:20.717456   99116 round_trippers.go:469] Request Headers:
	I1226 22:05:20.717463   99116 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:05:20.717469   99116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1226 22:05:20.719865   99116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:05:20.719887   99116 round_trippers.go:577] Response Headers:
	I1226 22:05:20.719895   99116 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:05:20 GMT
	I1226 22:05:20.719903   99116 round_trippers.go:580]     Audit-Id: 0215f096-37c2-47e4-bcec-b1d5eed038cc
	I1226 22:05:20.719911   99116 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:05:20.719919   99116 round_trippers.go:580]     Content-Type: application/json
	I1226 22:05:20.719928   99116 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9923a133-7910-4bf2-987f-98688b322d3a
	I1226 22:05:20.719936   99116 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2d4ddd71-7070-47fb-bebe-32d13b9800e9
	I1226 22:05:20.720142   99116 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"521"},"items":[{"metadata":{"name":"multinode-266826","uid":"4bcc5d86-6a9a-4421-9bf9-7c089930ab14","resourceVersion":"414","creationTimestamp":"2023-12-26T22:04:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266826","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-266826","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_04_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12840 chars]
	I1226 22:05:20.720643   99116 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1226 22:05:20.720662   99116 node_conditions.go:123] node cpu capacity is 8
	I1226 22:05:20.720675   99116 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1226 22:05:20.720681   99116 node_conditions.go:123] node cpu capacity is 8
	I1226 22:05:20.720687   99116 node_conditions.go:105] duration metric: took 189.786063ms to run NodePressure ...
	I1226 22:05:20.720701   99116 start.go:228] waiting for startup goroutines ...
	I1226 22:05:20.720736   99116 start.go:242] writing updated cluster config ...
	I1226 22:05:20.721016   99116 ssh_runner.go:195] Run: rm -f paused
	I1226 22:05:20.766368   99116 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1226 22:05:20.769466   99116 out.go:177] * Done! kubectl is now configured to use "multinode-266826" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 26 22:04:32 multinode-266826 crio[961]: time="2023-12-26 22:04:32.620222369Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/cd8c830c278dc4764f05912cf8f283d61746a17f10e608f04e808c4e713a9e1c/merged/etc/passwd: no such file or directory"
	Dec 26 22:04:32 multinode-266826 crio[961]: time="2023-12-26 22:04:32.620252277Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cd8c830c278dc4764f05912cf8f283d61746a17f10e608f04e808c4e713a9e1c/merged/etc/group: no such file or directory"
	Dec 26 22:04:32 multinode-266826 crio[961]: time="2023-12-26 22:04:32.651974889Z" level=info msg="Created container 7aeaf9c3df9661c8388386879a692029c240adac4afb0c71ae024287a92ec0eb: kube-system/storage-provisioner/storage-provisioner" id=1ab3204b-0313-47ea-8071-fca3c4eed84e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 26 22:04:32 multinode-266826 crio[961]: time="2023-12-26 22:04:32.652472216Z" level=info msg="Starting container: 7aeaf9c3df9661c8388386879a692029c240adac4afb0c71ae024287a92ec0eb" id=4fc24756-b9df-4019-979a-53836e14f657 name=/runtime.v1.RuntimeService/StartContainer
	Dec 26 22:04:32 multinode-266826 crio[961]: time="2023-12-26 22:04:32.658273784Z" level=info msg="Started container" PID=2381 containerID=7aeaf9c3df9661c8388386879a692029c240adac4afb0c71ae024287a92ec0eb description=kube-system/storage-provisioner/storage-provisioner id=4fc24756-b9df-4019-979a-53836e14f657 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f94e70841fddb1a6b4da68495f6419bb30afd37ed81cda9adef5191b892b8fc6
	Dec 26 22:05:21 multinode-266826 crio[961]: time="2023-12-26 22:05:21.777206919Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-25lpb/POD" id=dc1800b5-ec0f-4221-a646-08e4de259429 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 26 22:05:21 multinode-266826 crio[961]: time="2023-12-26 22:05:21.777283917Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 26 22:05:21 multinode-266826 crio[961]: time="2023-12-26 22:05:21.790970490Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-25lpb Namespace:default ID:5b4d265478960a21083d2e9e018f741167ca56113dce3a7eed19d80e2916ee5d UID:d32c8d8a-2a0a-4531-8c45-067153054026 NetNS:/var/run/netns/6823367f-ba9b-4e6c-8184-7873b0b47499 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 26 22:05:21 multinode-266826 crio[961]: time="2023-12-26 22:05:21.791001422Z" level=info msg="Adding pod default_busybox-5bc68d56bd-25lpb to CNI network \"kindnet\" (type=ptp)"
	Dec 26 22:05:21 multinode-266826 crio[961]: time="2023-12-26 22:05:21.798987427Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-25lpb Namespace:default ID:5b4d265478960a21083d2e9e018f741167ca56113dce3a7eed19d80e2916ee5d UID:d32c8d8a-2a0a-4531-8c45-067153054026 NetNS:/var/run/netns/6823367f-ba9b-4e6c-8184-7873b0b47499 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 26 22:05:21 multinode-266826 crio[961]: time="2023-12-26 22:05:21.799119130Z" level=info msg="Checking pod default_busybox-5bc68d56bd-25lpb for CNI network kindnet (type=ptp)"
	Dec 26 22:05:21 multinode-266826 crio[961]: time="2023-12-26 22:05:21.834062161Z" level=info msg="Ran pod sandbox 5b4d265478960a21083d2e9e018f741167ca56113dce3a7eed19d80e2916ee5d with infra container: default/busybox-5bc68d56bd-25lpb/POD" id=dc1800b5-ec0f-4221-a646-08e4de259429 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 26 22:05:21 multinode-266826 crio[961]: time="2023-12-26 22:05:21.835301302Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=9c6b4a08-7391-4921-8b0b-ab5b626d77aa name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:05:21 multinode-266826 crio[961]: time="2023-12-26 22:05:21.835550958Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=9c6b4a08-7391-4921-8b0b-ab5b626d77aa name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:05:21 multinode-266826 crio[961]: time="2023-12-26 22:05:21.836335946Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=e2e87608-bb3d-46c4-9b73-da94596c5100 name=/runtime.v1.ImageService/PullImage
	Dec 26 22:05:21 multinode-266826 crio[961]: time="2023-12-26 22:05:21.840033544Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 26 22:05:21 multinode-266826 crio[961]: time="2023-12-26 22:05:21.978243172Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 26 22:05:22 multinode-266826 crio[961]: time="2023-12-26 22:05:22.367703419Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=e2e87608-bb3d-46c4-9b73-da94596c5100 name=/runtime.v1.ImageService/PullImage
	Dec 26 22:05:22 multinode-266826 crio[961]: time="2023-12-26 22:05:22.368586641Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=d037ef17-8ca9-4f90-8358-9777ae35b2cf name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:05:22 multinode-266826 crio[961]: time="2023-12-26 22:05:22.369433938Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d037ef17-8ca9-4f90-8358-9777ae35b2cf name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:05:22 multinode-266826 crio[961]: time="2023-12-26 22:05:22.371035496Z" level=info msg="Creating container: default/busybox-5bc68d56bd-25lpb/busybox" id=2bf5cea3-293a-483b-8a45-8ab568d964dc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 26 22:05:22 multinode-266826 crio[961]: time="2023-12-26 22:05:22.371127733Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 26 22:05:22 multinode-266826 crio[961]: time="2023-12-26 22:05:22.452662716Z" level=info msg="Created container 11f2a4aa0bb54906eddf09d6cfe57ef899b0cb9899f7fb4b39ae7868e22ee0e3: default/busybox-5bc68d56bd-25lpb/busybox" id=2bf5cea3-293a-483b-8a45-8ab568d964dc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 26 22:05:22 multinode-266826 crio[961]: time="2023-12-26 22:05:22.453219496Z" level=info msg="Starting container: 11f2a4aa0bb54906eddf09d6cfe57ef899b0cb9899f7fb4b39ae7868e22ee0e3" id=4e92e508-9f04-49fc-9058-e8c5fbcd870e name=/runtime.v1.RuntimeService/StartContainer
	Dec 26 22:05:22 multinode-266826 crio[961]: time="2023-12-26 22:05:22.460071485Z" level=info msg="Started container" PID=2528 containerID=11f2a4aa0bb54906eddf09d6cfe57ef899b0cb9899f7fb4b39ae7868e22ee0e3 description=default/busybox-5bc68d56bd-25lpb/busybox id=4e92e508-9f04-49fc-9058-e8c5fbcd870e name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b4d265478960a21083d2e9e018f741167ca56113dce3a7eed19d80e2916ee5d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	11f2a4aa0bb54       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   5b4d265478960       busybox-5bc68d56bd-25lpb
	7aeaf9c3df966       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      53 seconds ago       Running             storage-provisioner       0                   f94e70841fddb       storage-provisioner
	2400db05674fe       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      54 seconds ago       Running             coredns                   0                   ca30f587646bb       coredns-5dd5756b68-4p457
	208a22ce3d13a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      55 seconds ago       Running             kube-proxy                0                   e2f704e845740       kube-proxy-frq75
	d02fd47012da9       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      55 seconds ago       Running             kindnet-cni               0                   2fb88aa7dd68c       kindnet-vfmsx
	6cc3600c88aa9       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   bbb0ba32fe155       kube-scheduler-multinode-266826
	ee377d38dbeaf       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   0497d1e51e299       kube-controller-manager-multinode-266826
	989be8f823455       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   9a115159980d7       kube-apiserver-multinode-266826
	b8fa7d6be05a5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   0ae6d6f6215de       etcd-multinode-266826
	
	
	==> coredns [2400db05674fe194566a6aa3bbb5d21cb5a94ae69a786f1b068f05a79178d340] <==
	[INFO] 10.244.0.3:60221 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085404s
	[INFO] 10.244.1.2:46806 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124133s
	[INFO] 10.244.1.2:50541 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001591929s
	[INFO] 10.244.1.2:44672 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007316s
	[INFO] 10.244.1.2:49342 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088682s
	[INFO] 10.244.1.2:40268 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001080734s
	[INFO] 10.244.1.2:36106 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000046569s
	[INFO] 10.244.1.2:52010 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068272s
	[INFO] 10.244.1.2:43282 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066033s
	[INFO] 10.244.0.3:60516 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093043s
	[INFO] 10.244.0.3:56832 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008948s
	[INFO] 10.244.0.3:59406 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000054776s
	[INFO] 10.244.0.3:35137 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000050458s
	[INFO] 10.244.1.2:57216 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157606s
	[INFO] 10.244.1.2:49990 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088837s
	[INFO] 10.244.1.2:46035 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062602s
	[INFO] 10.244.1.2:47882 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071487s
	[INFO] 10.244.0.3:50303 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090108s
	[INFO] 10.244.0.3:60718 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155107s
	[INFO] 10.244.0.3:50572 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001131s
	[INFO] 10.244.0.3:49029 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106537s
	[INFO] 10.244.1.2:54475 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013469s
	[INFO] 10.244.1.2:48595 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075217s
	[INFO] 10.244.1.2:49583 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000083867s
	[INFO] 10.244.1.2:40097 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000060777s
	
	
	==> describe nodes <==
	Name:               multinode-266826
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-266826
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=multinode-266826
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_26T22_04_16_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 22:04:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-266826
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 22:05:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Dec 2023 22:04:31 +0000   Tue, 26 Dec 2023 22:04:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Dec 2023 22:04:31 +0000   Tue, 26 Dec 2023 22:04:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Dec 2023 22:04:31 +0000   Tue, 26 Dec 2023 22:04:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Dec 2023 22:04:31 +0000   Tue, 26 Dec 2023 22:04:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-266826
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d2fa2d078cd486faff94bb9bed46ab6
	  System UUID:                ab56844f-c12a-4020-8af6-58eceed00e67
	  Boot ID:                    86db03b9-ef11-43ea-be40-040b33a40e54
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-25lpb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-5dd5756b68-4p457                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     57s
	  kube-system                 etcd-multinode-266826                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         70s
	  kube-system                 kindnet-vfmsx                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-multinode-266826             250m (3%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-multinode-266826    200m (2%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-frq75                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-multinode-266826             100m (1%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 77s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s (x8 over 77s)  kubelet          Node multinode-266826 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s (x8 over 77s)  kubelet          Node multinode-266826 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s (x8 over 77s)  kubelet          Node multinode-266826 status is now: NodeHasSufficientPID
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s                kubelet          Node multinode-266826 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s                kubelet          Node multinode-266826 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s                kubelet          Node multinode-266826 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           58s                node-controller  Node multinode-266826 event: Registered Node multinode-266826 in Controller
	  Normal  NodeReady                55s                kubelet          Node multinode-266826 status is now: NodeReady
	
	
	Name:               multinode-266826-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-266826-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=multinode-266826
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_26T22_04_45_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 22:04:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-266826-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 22:05:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Dec 2023 22:05:18 +0000   Tue, 26 Dec 2023 22:04:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Dec 2023 22:05:18 +0000   Tue, 26 Dec 2023 22:04:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Dec 2023 22:05:18 +0000   Tue, 26 Dec 2023 22:04:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Dec 2023 22:05:18 +0000   Tue, 26 Dec 2023 22:05:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-266826-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 88b13388169d4f9a9034d3d048ba6b4d
	  System UUID:                05ce3979-9032-4647-ba00-7b418dbeae6a
	  Boot ID:                    86db03b9-ef11-43ea-be40-040b33a40e54
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-8vrwf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-p9nd6               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      41s
	  kube-system                 kube-proxy-7fj8c            0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 38s                kube-proxy       
	  Normal  NodeHasSufficientMemory  41s (x5 over 43s)  kubelet          Node multinode-266826-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x5 over 43s)  kubelet          Node multinode-266826-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x5 over 43s)  kubelet          Node multinode-266826-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                node-controller  Node multinode-266826-m02 event: Registered Node multinode-266826-m02 in Controller
	  Normal  NodeReady                8s                 kubelet          Node multinode-266826-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.004919] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006671] FS-Cache: N-cookie d=00000000ddec32f3{9p.inode} n=00000000627f747c
	[  +0.008752] FS-Cache: N-key=[8] '91a00f0200000000'
	[  +0.253422] FS-Cache: Duplicate cookie detected
	[  +0.004671] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006764] FS-Cache: O-cookie d=00000000ddec32f3{9p.inode} n=000000008d706fc0
	[  +0.007354] FS-Cache: O-key=[8] '99a00f0200000000'
	[  +0.004955] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007949] FS-Cache: N-cookie d=00000000ddec32f3{9p.inode} n=00000000b5f489b5
	[  +0.007343] FS-Cache: N-key=[8] '99a00f0200000000'
	[  +4.728524] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec26 21:56] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 43 c5 69 e0 1a 5a 30 9b d2 92 97 08 00
	[  +1.016201] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 43 c5 69 e0 1a 5a 30 9b d2 92 97 08 00
	[  +2.019787] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 43 c5 69 e0 1a 5a 30 9b d2 92 97 08 00
	[  +4.155618] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 43 c5 69 e0 1a 5a 30 9b d2 92 97 08 00
	[  +8.191113] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 43 c5 69 e0 1a 5a 30 9b d2 92 97 08 00
	[ +16.130329] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 43 c5 69 e0 1a 5a 30 9b d2 92 97 08 00
	[Dec26 21:57] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 43 c5 69 e0 1a 5a 30 9b d2 92 97 08 00
	
	
	==> etcd [b8fa7d6be05a5b9cda2b593e289dc25a9839c7e0f0166d7f7f65831a1516ffaf] <==
	{"level":"info","ts":"2023-12-26T22:04:10.682337Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-26T22:04:10.682341Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-26T22:04:10.68236Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-26T22:04:11.171373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-26T22:04:11.171422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-26T22:04:11.171453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-12-26T22:04:11.17147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-12-26T22:04:11.171478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-26T22:04:11.171491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-12-26T22:04:11.171504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-26T22:04:11.172448Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-266826 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-26T22:04:11.172459Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-26T22:04:11.172501Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-26T22:04:11.172522Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-26T22:04:11.17268Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-26T22:04:11.172734Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-26T22:04:11.173339Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-26T22:04:11.173424Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-26T22:04:11.173455Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-26T22:04:11.173763Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-12-26T22:04:11.173912Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-26T22:04:38.903403Z","caller":"traceutil/trace.go:171","msg":"trace[1399820192] linearizableReadLoop","detail":"{readStateIndex:458; appliedIndex:457; }","duration":"165.770231ms","start":"2023-12-26T22:04:38.737615Z","end":"2023-12-26T22:04:38.903385Z","steps":["trace[1399820192] 'read index received'  (duration: 165.649468ms)","trace[1399820192] 'applied index is now lower than readState.Index'  (duration: 119.423µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-26T22:04:38.903504Z","caller":"traceutil/trace.go:171","msg":"trace[705716882] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"196.988615ms","start":"2023-12-26T22:04:38.706497Z","end":"2023-12-26T22:04:38.903486Z","steps":["trace[705716882] 'process raft request'  (duration: 196.779548ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T22:04:38.903563Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.949215ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-26T22:04:38.9036Z","caller":"traceutil/trace.go:171","msg":"trace[610122974] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:443; }","duration":"166.00474ms","start":"2023-12-26T22:04:38.737584Z","end":"2023-12-26T22:04:38.903589Z","steps":["trace[610122974] 'agreement among raft nodes before linearized reading'  (duration: 165.879687ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:05:26 up 47 min,  0 users,  load average: 0.56, 0.78, 0.62
	Linux multinode-266826 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [d02fd47012da9a8bdb5af139586b5b867e9092f1b3435a3f6e1ff046d9719d71] <==
	podIP = 192.168.58.2
	I1226 22:04:30.666589       1 main.go:116] setting mtu 1500 for CNI 
	I1226 22:04:30.666608       1 main.go:146] kindnetd IP family: "ipv4"
	I1226 22:04:30.666625       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1226 22:04:31.059663       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1226 22:04:31.059700       1 main.go:227] handling current node
	I1226 22:04:41.073927       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1226 22:04:41.073950       1 main.go:227] handling current node
	I1226 22:04:51.086231       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1226 22:04:51.086254       1 main.go:227] handling current node
	I1226 22:04:51.086263       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1226 22:04:51.086268       1 main.go:250] Node multinode-266826-m02 has CIDR [10.244.1.0/24] 
	I1226 22:04:51.086412       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1226 22:05:01.097995       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1226 22:05:01.098021       1 main.go:227] handling current node
	I1226 22:05:01.098030       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1226 22:05:01.098034       1 main.go:250] Node multinode-266826-m02 has CIDR [10.244.1.0/24] 
	I1226 22:05:11.110115       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1226 22:05:11.110137       1 main.go:227] handling current node
	I1226 22:05:11.110145       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1226 22:05:11.110150       1 main.go:250] Node multinode-266826-m02 has CIDR [10.244.1.0/24] 
	I1226 22:05:21.114518       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1226 22:05:21.114541       1 main.go:227] handling current node
	I1226 22:05:21.114550       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1226 22:05:21.114555       1 main.go:250] Node multinode-266826-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [989be8f823455d61f8d14e519581eeb6ecbd3414312102c6319031429b728c18] <==
	I1226 22:04:12.855733       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1226 22:04:12.855771       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1226 22:04:12.855837       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1226 22:04:12.855858       1 aggregator.go:166] initial CRD sync complete...
	I1226 22:04:12.855865       1 autoregister_controller.go:141] Starting autoregister controller
	I1226 22:04:12.855871       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1226 22:04:12.855876       1 cache.go:39] Caches are synced for autoregister controller
	I1226 22:04:12.855960       1 shared_informer.go:318] Caches are synced for configmaps
	I1226 22:04:12.857094       1 controller.go:624] quota admission added evaluator for: namespaces
	I1226 22:04:12.871567       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1226 22:04:13.707034       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1226 22:04:13.710289       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1226 22:04:13.710307       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1226 22:04:14.069140       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1226 22:04:14.101990       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1226 22:04:14.169124       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1226 22:04:14.174430       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1226 22:04:14.175313       1 controller.go:624] quota admission added evaluator for: endpoints
	I1226 22:04:14.181281       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1226 22:04:14.777354       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1226 22:04:15.854088       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1226 22:04:15.863423       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1226 22:04:15.871642       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1226 22:04:29.447078       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1226 22:04:29.562116       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [ee377d38dbeafdb3ac62cd67e7eab7cd47d714a3acaa04ff3e168621ef69c555] <==
	I1226 22:04:32.074092       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.166µs"
	I1226 22:04:32.091528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.97592ms"
	I1226 22:04:32.091635       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.03µs"
	I1226 22:04:33.791496       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1226 22:04:45.402927       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-266826-m02\" does not exist"
	I1226 22:04:45.408868       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-266826-m02" podCIDRs=["10.244.1.0/24"]
	I1226 22:04:45.413579       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7fj8c"
	I1226 22:04:45.415746       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-p9nd6"
	I1226 22:04:48.794079       1 event.go:307] "Event occurred" object="multinode-266826-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-266826-m02 event: Registered Node multinode-266826-m02 in Controller"
	I1226 22:04:48.794085       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-266826-m02"
	I1226 22:05:18.972425       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-266826-m02"
	I1226 22:05:21.452778       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1226 22:05:21.463437       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-8vrwf"
	I1226 22:05:21.468198       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-25lpb"
	I1226 22:05:21.474705       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="21.276029ms"
	I1226 22:05:21.480905       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.092048ms"
	I1226 22:05:21.480994       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.053µs"
	I1226 22:05:21.481036       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.438µs"
	I1226 22:05:21.483070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="51.484µs"
	I1226 22:05:21.484749       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="39.352µs"
	I1226 22:05:22.984825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.818342ms"
	I1226 22:05:22.984935       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="64.102µs"
	I1226 22:05:23.169641       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="3.852217ms"
	I1226 22:05:23.169725       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.915µs"
	I1226 22:05:23.809004       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-8vrwf" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-8vrwf"
	
	
	==> kube-proxy [208a22ce3d13af464aa2c2a41fb4b083eb1a4ede07ed5adca6b124e759d0144d] <==
	I1226 22:04:30.615579       1 server_others.go:69] "Using iptables proxy"
	I1226 22:04:30.623412       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1226 22:04:30.640376       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1226 22:04:30.641949       1 server_others.go:152] "Using iptables Proxier"
	I1226 22:04:30.641973       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1226 22:04:30.641987       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1226 22:04:30.642011       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1226 22:04:30.642303       1 server.go:846] "Version info" version="v1.28.4"
	I1226 22:04:30.642322       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1226 22:04:30.643465       1 config.go:188] "Starting service config controller"
	I1226 22:04:30.643562       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1226 22:04:30.643639       1 config.go:315] "Starting node config controller"
	I1226 22:04:30.643693       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1226 22:04:30.643659       1 config.go:97] "Starting endpoint slice config controller"
	I1226 22:04:30.643869       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1226 22:04:30.744310       1 shared_informer.go:318] Caches are synced for node config
	I1226 22:04:30.744348       1 shared_informer.go:318] Caches are synced for service config
	I1226 22:04:30.745445       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6cc3600c88aa92acf4c3bde0f958a57f3b9186f68630feeeb1330d139703afcd] <==
	E1226 22:04:12.865958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1226 22:04:12.866124       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1226 22:04:12.866045       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1226 22:04:12.866079       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1226 22:04:12.866155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1226 22:04:12.865837       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1226 22:04:12.866174       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1226 22:04:12.866185       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1226 22:04:12.866210       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1226 22:04:12.866209       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1226 22:04:12.866224       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1226 22:04:12.866240       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1226 22:04:12.866594       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1226 22:04:12.866690       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1226 22:04:12.866626       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1226 22:04:12.866802       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1226 22:04:13.727676       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1226 22:04:13.727704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1226 22:04:13.840777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1226 22:04:13.840815       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1226 22:04:13.904416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1226 22:04:13.904446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1226 22:04:13.933065       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1226 22:04:13.933106       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1226 22:04:14.358152       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 26 22:04:29 multinode-266826 kubelet[1586]: E1226 22:04:29.869703    1586 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 26 22:04:29 multinode-266826 kubelet[1586]: E1226 22:04:29.869744    1586 projected.go:198] Error preparing data for projected volume kube-api-access-djjmh for pod kube-system/kube-proxy-frq75: configmap "kube-root-ca.crt" not found
	Dec 26 22:04:29 multinode-266826 kubelet[1586]: E1226 22:04:29.869828    1586 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e47e4ce1-94e6-4f54-8ce4-717af6ef6e4b-kube-api-access-djjmh podName:e47e4ce1-94e6-4f54-8ce4-717af6ef6e4b nodeName:}" failed. No retries permitted until 2023-12-26 22:04:30.369799065 +0000 UTC m=+14.536749994 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-djjmh" (UniqueName: "kubernetes.io/projected/e47e4ce1-94e6-4f54-8ce4-717af6ef6e4b-kube-api-access-djjmh") pod "kube-proxy-frq75" (UID: "e47e4ce1-94e6-4f54-8ce4-717af6ef6e4b") : configmap "kube-root-ca.crt" not found
	Dec 26 22:04:29 multinode-266826 kubelet[1586]: E1226 22:04:29.870138    1586 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 26 22:04:29 multinode-266826 kubelet[1586]: E1226 22:04:29.870169    1586 projected.go:198] Error preparing data for projected volume kube-api-access-lbszf for pod kube-system/kindnet-vfmsx: configmap "kube-root-ca.crt" not found
	Dec 26 22:04:29 multinode-266826 kubelet[1586]: E1226 22:04:29.870215    1586 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6020a6f-beb5-41f6-a52a-732e7052efa7-kube-api-access-lbszf podName:e6020a6f-beb5-41f6-a52a-732e7052efa7 nodeName:}" failed. No retries permitted until 2023-12-26 22:04:30.370198353 +0000 UTC m=+14.537149282 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lbszf" (UniqueName: "kubernetes.io/projected/e6020a6f-beb5-41f6-a52a-732e7052efa7-kube-api-access-lbszf") pod "kindnet-vfmsx" (UID: "e6020a6f-beb5-41f6-a52a-732e7052efa7") : configmap "kube-root-ca.crt" not found
	Dec 26 22:04:30 multinode-266826 kubelet[1586]: W1226 22:04:30.515500    1586 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ed230b7e6557c20867e253d5c717cd5262c1c659284976821cc8dcb35ff714d1/crio-e2f704e84574065f9bf44085e7b4114e49578f12d34dd946b60e80f3c837b8eb WatchSource:0}: Error finding container e2f704e84574065f9bf44085e7b4114e49578f12d34dd946b60e80f3c837b8eb: Status 404 returned error can't find the container with id e2f704e84574065f9bf44085e7b4114e49578f12d34dd946b60e80f3c837b8eb
	Dec 26 22:04:30 multinode-266826 kubelet[1586]: W1226 22:04:30.515788    1586 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ed230b7e6557c20867e253d5c717cd5262c1c659284976821cc8dcb35ff714d1/crio-2fb88aa7dd68c23a8a186d2e69af85a0351b255e8559f35242de95ed66457632 WatchSource:0}: Error finding container 2fb88aa7dd68c23a8a186d2e69af85a0351b255e8559f35242de95ed66457632: Status 404 returned error can't find the container with id 2fb88aa7dd68c23a8a186d2e69af85a0351b255e8559f35242de95ed66457632
	Dec 26 22:04:31 multinode-266826 kubelet[1586]: I1226 22:04:31.069947    1586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-frq75" podStartSLOduration=2.069900692 podCreationTimestamp="2023-12-26 22:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-26 22:04:31.069753385 +0000 UTC m=+15.236704319" watchObservedRunningTime="2023-12-26 22:04:31.069900692 +0000 UTC m=+15.236851628"
	Dec 26 22:04:31 multinode-266826 kubelet[1586]: I1226 22:04:31.079244    1586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-vfmsx" podStartSLOduration=2.079204666 podCreationTimestamp="2023-12-26 22:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-26 22:04:31.079114679 +0000 UTC m=+15.246065613" watchObservedRunningTime="2023-12-26 22:04:31.079204666 +0000 UTC m=+15.246155600"
	Dec 26 22:04:31 multinode-266826 kubelet[1586]: I1226 22:04:31.189563    1586 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 26 22:04:31 multinode-266826 kubelet[1586]: I1226 22:04:31.212822    1586 topology_manager.go:215] "Topology Admit Handler" podUID="1082508c-2f95-46f0-8ec7-f530272863d8" podNamespace="kube-system" podName="coredns-5dd5756b68-4p457"
	Dec 26 22:04:31 multinode-266826 kubelet[1586]: I1226 22:04:31.368142    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcmsl\" (UniqueName: \"kubernetes.io/projected/1082508c-2f95-46f0-8ec7-f530272863d8-kube-api-access-zcmsl\") pod \"coredns-5dd5756b68-4p457\" (UID: \"1082508c-2f95-46f0-8ec7-f530272863d8\") " pod="kube-system/coredns-5dd5756b68-4p457"
	Dec 26 22:04:31 multinode-266826 kubelet[1586]: I1226 22:04:31.368192    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1082508c-2f95-46f0-8ec7-f530272863d8-config-volume\") pod \"coredns-5dd5756b68-4p457\" (UID: \"1082508c-2f95-46f0-8ec7-f530272863d8\") " pod="kube-system/coredns-5dd5756b68-4p457"
	Dec 26 22:04:31 multinode-266826 kubelet[1586]: W1226 22:04:31.563412    1586 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ed230b7e6557c20867e253d5c717cd5262c1c659284976821cc8dcb35ff714d1/crio-ca30f587646bb3fe63cf75cd5225b1f4615d059fa8593cb77b336d38a79e7d8e WatchSource:0}: Error finding container ca30f587646bb3fe63cf75cd5225b1f4615d059fa8593cb77b336d38a79e7d8e: Status 404 returned error can't find the container with id ca30f587646bb3fe63cf75cd5225b1f4615d059fa8593cb77b336d38a79e7d8e
	Dec 26 22:04:31 multinode-266826 kubelet[1586]: I1226 22:04:31.974861    1586 topology_manager.go:215] "Topology Admit Handler" podUID="9b493a83-ad24-43ef-a212-44afe94ff921" podNamespace="kube-system" podName="storage-provisioner"
	Dec 26 22:04:32 multinode-266826 kubelet[1586]: I1226 22:04:32.073836    1586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-4p457" podStartSLOduration=3.073787821 podCreationTimestamp="2023-12-26 22:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-26 22:04:32.073751488 +0000 UTC m=+16.240702421" watchObservedRunningTime="2023-12-26 22:04:32.073787821 +0000 UTC m=+16.240738754"
	Dec 26 22:04:32 multinode-266826 kubelet[1586]: I1226 22:04:32.173189    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdbxc\" (UniqueName: \"kubernetes.io/projected/9b493a83-ad24-43ef-a212-44afe94ff921-kube-api-access-cdbxc\") pod \"storage-provisioner\" (UID: \"9b493a83-ad24-43ef-a212-44afe94ff921\") " pod="kube-system/storage-provisioner"
	Dec 26 22:04:32 multinode-266826 kubelet[1586]: I1226 22:04:32.173329    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9b493a83-ad24-43ef-a212-44afe94ff921-tmp\") pod \"storage-provisioner\" (UID: \"9b493a83-ad24-43ef-a212-44afe94ff921\") " pod="kube-system/storage-provisioner"
	Dec 26 22:04:32 multinode-266826 kubelet[1586]: W1226 22:04:32.607240    1586 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ed230b7e6557c20867e253d5c717cd5262c1c659284976821cc8dcb35ff714d1/crio-f94e70841fddb1a6b4da68495f6419bb30afd37ed81cda9adef5191b892b8fc6 WatchSource:0}: Error finding container f94e70841fddb1a6b4da68495f6419bb30afd37ed81cda9adef5191b892b8fc6: Status 404 returned error can't find the container with id f94e70841fddb1a6b4da68495f6419bb30afd37ed81cda9adef5191b892b8fc6
	Dec 26 22:04:33 multinode-266826 kubelet[1586]: I1226 22:04:33.076131    1586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.076087123 podCreationTimestamp="2023-12-26 22:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-26 22:04:33.075629307 +0000 UTC m=+17.242580241" watchObservedRunningTime="2023-12-26 22:04:33.076087123 +0000 UTC m=+17.243038056"
	Dec 26 22:05:21 multinode-266826 kubelet[1586]: I1226 22:05:21.475253    1586 topology_manager.go:215] "Topology Admit Handler" podUID="d32c8d8a-2a0a-4531-8c45-067153054026" podNamespace="default" podName="busybox-5bc68d56bd-25lpb"
	Dec 26 22:05:21 multinode-266826 kubelet[1586]: I1226 22:05:21.634791    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m69fb\" (UniqueName: \"kubernetes.io/projected/d32c8d8a-2a0a-4531-8c45-067153054026-kube-api-access-m69fb\") pod \"busybox-5bc68d56bd-25lpb\" (UID: \"d32c8d8a-2a0a-4531-8c45-067153054026\") " pod="default/busybox-5bc68d56bd-25lpb"
	Dec 26 22:05:21 multinode-266826 kubelet[1586]: W1226 22:05:21.831799    1586 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ed230b7e6557c20867e253d5c717cd5262c1c659284976821cc8dcb35ff714d1/crio-5b4d265478960a21083d2e9e018f741167ca56113dce3a7eed19d80e2916ee5d WatchSource:0}: Error finding container 5b4d265478960a21083d2e9e018f741167ca56113dce3a7eed19d80e2916ee5d: Status 404 returned error can't find the container with id 5b4d265478960a21083d2e9e018f741167ca56113dce3a7eed19d80e2916ee5d
	Dec 26 22:05:23 multinode-266826 kubelet[1586]: I1226 22:05:23.165798    1586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-25lpb" podStartSLOduration=1.633332912 podCreationTimestamp="2023-12-26 22:05:21 +0000 UTC" firstStartedPulling="2023-12-26 22:05:21.835739299 +0000 UTC m=+66.002690223" lastFinishedPulling="2023-12-26 22:05:22.368150379 +0000 UTC m=+66.535101296" observedRunningTime="2023-12-26 22:05:23.165550385 +0000 UTC m=+67.332501306" watchObservedRunningTime="2023-12-26 22:05:23.165743985 +0000 UTC m=+67.332694919"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-266826 -n multinode-266826
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-266826 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.03s)
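Note on the kubelet log above: the repeated 'configmap "kube-root-ca.crt" not found' and MountVolume.SetUp errors are normal bring-up churn on a freshly started node. kube-controller-manager publishes the kube-root-ca.crt ConfigMap into every namespace shortly after the control plane is ready, and the kubelet retries the projected-volume mount after the logged 500ms backoff; here the retries succeed almost immediately, since kube-proxy-frq75 and kindnet-vfmsx are reported Running at 22:04:31. A minimal sketch for checking this by hand against the same profile (assuming the multinode-266826 context from the log still exists):

	kubectl --context multinode-266826 -n kube-system get configmap kube-root-ca.crt
	kubectl --context multinode-266826 -n kube-system get events --field-selector reason=FailedMount --sort-by=.lastTimestamp

If the ConfigMap exists and no recent FailedMount events remain, these entries can likely be ruled out as a cause of the ping failure.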

                                                
                                    
TestRunningBinaryUpgrade (94.34s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.265702249.exe start -p running-upgrade-550844 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1226 22:15:07.494292   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.265702249.exe start -p running-upgrade-550844 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m25.730256984s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-550844 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-550844 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.606468404s)

                                                
                                                
-- stdout --
	* [running-upgrade-550844] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-550844 in cluster running-upgrade-550844
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Updating the running docker "running-upgrade-550844" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1226 22:16:29.983685  183749 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:16:29.983816  183749 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:16:29.983827  183749 out.go:309] Setting ErrFile to fd 2...
	I1226 22:16:29.983834  183749 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:16:29.984025  183749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
	I1226 22:16:29.984561  183749 out.go:303] Setting JSON to false
	I1226 22:16:29.985800  183749 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3540,"bootTime":1703625450,"procs":425,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1226 22:16:29.985863  183749 start.go:138] virtualization: kvm guest
	I1226 22:16:29.988371  183749 out.go:177] * [running-upgrade-550844] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1226 22:16:29.989880  183749 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:16:29.991273  183749 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:16:29.989922  183749 notify.go:220] Checking for updates...
	I1226 22:16:29.992928  183749 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 22:16:29.994442  183749 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	I1226 22:16:29.995938  183749 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1226 22:16:29.997330  183749 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:16:29.999050  183749 config.go:182] Loaded profile config "running-upgrade-550844": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1226 22:16:29.999074  183749 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I1226 22:16:30.000773  183749 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1226 22:16:30.001978  183749 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 22:16:30.024836  183749 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 22:16:30.024968  183749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:16:30.088685  183749 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:71 SystemTime:2023-12-26 22:16:30.079629918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<ni
l> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 22:16:30.088779  183749 docker.go:295] overlay module found
	I1226 22:16:30.090740  183749 out.go:177] * Using the docker driver based on existing profile
	I1226 22:16:30.092118  183749 start.go:298] selected driver: docker
	I1226 22:16:30.092133  183749 start.go:902] validating driver "docker" against &{Name:running-upgrade-550844 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-550844 Namespace: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1226 22:16:30.092241  183749 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 22:16:30.093036  183749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:16:30.152942  183749 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:71 SystemTime:2023-12-26 22:16:30.14361698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 22:16:30.153265  183749 cni.go:84] Creating CNI manager for ""
	I1226 22:16:30.153290  183749 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1226 22:16:30.153304  183749 start_flags.go:323] config:
	{Name:running-upgrade-550844 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-550844 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cr
io CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 Auto
PauseInterval:0s GPUs:}
	I1226 22:16:30.155270  183749 out.go:177] * Starting control plane node running-upgrade-550844 in cluster running-upgrade-550844
	I1226 22:16:30.156830  183749 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 22:16:30.158237  183749 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 22:16:30.159604  183749 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1226 22:16:30.159628  183749 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 22:16:30.177630  183749 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 22:16:30.177655  183749 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	W1226 22:16:30.204143  183749 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1226 22:16:30.204283  183749 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/running-upgrade-550844/config.json ...
	I1226 22:16:30.204371  183749 cache.go:107] acquiring lock: {Name:mkcca32fa4e44ee8cab2c12b1ea2c5be9c926aaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:16:30.204393  183749 cache.go:107] acquiring lock: {Name:mk877b437db095d0b6f4031cc4042d3cb45899b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:16:30.204427  183749 cache.go:107] acquiring lock: {Name:mk5024a46f7dc26cf5e5a158201f08c339a072c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:16:30.204484  183749 cache.go:115] /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1226 22:16:30.204505  183749 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 144.1µs
	I1226 22:16:30.204531  183749 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1226 22:16:30.204469  183749 cache.go:107] acquiring lock: {Name:mkc11a21c60abe811b1a21747a76f5577081844d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:16:30.204535  183749 cache.go:107] acquiring lock: {Name:mk9e66cf5da3625106b9643ed9c9f5759e5e0416 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:16:30.204540  183749 cache.go:107] acquiring lock: {Name:mkd9273560581b15976f1deec9beca552f3b4850 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:16:30.204559  183749 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I1226 22:16:30.204619  183749 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1226 22:16:30.204644  183749 cache.go:107] acquiring lock: {Name:mk54a7f8936c275afec99d2f95d7dea2772be9df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:16:30.204666  183749 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1226 22:16:30.204716  183749 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I1226 22:16:30.204738  183749 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1226 22:16:30.204551  183749 cache.go:194] Successfully downloaded all kic artifacts
	I1226 22:16:30.204392  183749 cache.go:107] acquiring lock: {Name:mkc58723b06625c14a4ad5d7a5172e2e2de66afc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:16:30.204669  183749 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I1226 22:16:30.204819  183749 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I1226 22:16:30.204870  183749 start.go:365] acquiring machines lock for running-upgrade-550844: {Name:mk96ba8980fac02ff7936ff9b33e131315b1ea8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:16:30.204995  183749 start.go:369] acquired machines lock for "running-upgrade-550844" in 99.351µs
	I1226 22:16:30.205019  183749 start.go:96] Skipping create...Using existing machine configuration
	I1226 22:16:30.205031  183749 fix.go:54] fixHost starting: m01
	I1226 22:16:30.205326  183749 cli_runner.go:164] Run: docker container inspect running-upgrade-550844 --format={{.State.Status}}
	I1226 22:16:30.205575  183749 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1226 22:16:30.205705  183749 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1226 22:16:30.205761  183749 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1226 22:16:30.205761  183749 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I1226 22:16:30.205706  183749 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I1226 22:16:30.205708  183749 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I1226 22:16:30.205826  183749 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I1226 22:16:30.232248  183749 fix.go:102] recreateIfNeeded on running-upgrade-550844: state=Running err=<nil>
	W1226 22:16:30.232279  183749 fix.go:128] unexpected machine state, will restart: <nil>
	I1226 22:16:30.233869  183749 out.go:177] * Updating the running docker "running-upgrade-550844" container ...
	I1226 22:16:30.235262  183749 machine.go:88] provisioning docker machine ...
	I1226 22:16:30.235298  183749 ubuntu.go:169] provisioning hostname "running-upgrade-550844"
	I1226 22:16:30.235363  183749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-550844
	I1226 22:16:30.259435  183749 main.go:141] libmachine: Using SSH client type: native
	I1226 22:16:30.259767  183749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32955 <nil> <nil>}
	I1226 22:16:30.259782  183749 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-550844 && echo "running-upgrade-550844" | sudo tee /etc/hostname
	I1226 22:16:30.379564  183749 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-550844
	
	I1226 22:16:30.379640  183749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-550844
	I1226 22:16:30.396092  183749 cache.go:162] opening:  /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1226 22:16:30.397601  183749 cache.go:162] opening:  /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1226 22:16:30.399365  183749 main.go:141] libmachine: Using SSH client type: native
	I1226 22:16:30.399693  183749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32955 <nil> <nil>}
	I1226 22:16:30.399719  183749 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-550844' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-550844/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-550844' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 22:16:30.423579  183749 cache.go:162] opening:  /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I1226 22:16:30.427080  183749 cache.go:162] opening:  /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I1226 22:16:30.452850  183749 cache.go:162] opening:  /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1226 22:16:30.465570  183749 cache.go:157] /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1226 22:16:30.465597  183749 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 261.128527ms
	I1226 22:16:30.465609  183749 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1226 22:16:30.475731  183749 cache.go:162] opening:  /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I1226 22:16:30.502354  183749 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 22:16:30.502381  183749 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-7214/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-7214/.minikube}
	I1226 22:16:30.502403  183749 ubuntu.go:177] setting up certificates
	I1226 22:16:30.502431  183749 provision.go:83] configureAuth start
	I1226 22:16:30.502483  183749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-550844
	I1226 22:16:30.535096  183749 provision.go:138] copyHostCerts
	I1226 22:16:30.535157  183749 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem, removing ...
	I1226 22:16:30.535167  183749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem
	I1226 22:16:30.535657  183749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem (1082 bytes)
	I1226 22:16:30.535788  183749 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem, removing ...
	I1226 22:16:30.535802  183749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem
	I1226 22:16:30.535844  183749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem (1123 bytes)
	I1226 22:16:30.535928  183749 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem, removing ...
	I1226 22:16:30.535939  183749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem
	I1226 22:16:30.535975  183749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem (1679 bytes)
	I1226 22:16:30.536113  183749 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-550844 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-550844]
	I1226 22:16:30.554268  183749 cache.go:162] opening:  /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I1226 22:16:30.679168  183749 provision.go:172] copyRemoteCerts
	I1226 22:16:30.679256  183749 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 22:16:30.679306  183749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-550844
	I1226 22:16:30.705637  183749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/running-upgrade-550844/id_rsa Username:docker}
	I1226 22:16:30.800658  183749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 22:16:30.822825  183749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1226 22:16:30.847791  183749 cache.go:157] /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1226 22:16:30.847821  183749 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 643.181758ms
	I1226 22:16:30.847835  183749 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1226 22:16:30.852290  183749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1226 22:16:30.878812  183749 provision.go:86] duration metric: configureAuth took 376.366824ms
	I1226 22:16:30.878843  183749 ubuntu.go:193] setting minikube options for container-runtime
	I1226 22:16:30.879089  183749 config.go:182] Loaded profile config "running-upgrade-550844": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1226 22:16:30.879223  183749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-550844
	I1226 22:16:30.909572  183749 main.go:141] libmachine: Using SSH client type: native
	I1226 22:16:30.910162  183749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32955 <nil> <nil>}
	I1226 22:16:30.910183  183749 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 22:16:31.223885  183749 cache.go:157] /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1226 22:16:31.223921  183749 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 1.019493071s
	I1226 22:16:31.223939  183749 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1226 22:16:31.456819  183749 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 22:16:31.456853  183749 machine.go:91] provisioned docker machine in 1.221572104s
	I1226 22:16:31.456864  183749 start.go:300] post-start starting for "running-upgrade-550844" (driver="docker")
	I1226 22:16:31.456876  183749 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 22:16:31.456940  183749 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 22:16:31.457002  183749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-550844
	I1226 22:16:31.491657  183749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/running-upgrade-550844/id_rsa Username:docker}
	I1226 22:16:31.585283  183749 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 22:16:31.589583  183749 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 22:16:31.589615  183749 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 22:16:31.589630  183749 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 22:16:31.589639  183749 info.go:137] Remote host: Ubuntu 19.10
	I1226 22:16:31.589651  183749 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-7214/.minikube/addons for local assets ...
	I1226 22:16:31.589706  183749 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-7214/.minikube/files for local assets ...
	I1226 22:16:31.589826  183749 filesync.go:149] local asset: /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem -> 139762.pem in /etc/ssl/certs
	I1226 22:16:31.589966  183749 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 22:16:31.599712  183749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem --> /etc/ssl/certs/139762.pem (1708 bytes)
	I1226 22:16:31.621357  183749 start.go:303] post-start completed in 164.477248ms
	I1226 22:16:31.621462  183749 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:16:31.621508  183749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-550844
	I1226 22:16:31.641611  183749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/running-upgrade-550844/id_rsa Username:docker}
	I1226 22:16:31.683300  183749 cache.go:157] /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1226 22:16:31.683331  183749 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.478972998s
	I1226 22:16:31.683346  183749 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1226 22:16:31.728006  183749 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 22:16:31.732109  183749 fix.go:56] fixHost completed within 1.527071311s
	I1226 22:16:31.732134  183749 start.go:83] releasing machines lock for "running-upgrade-550844", held for 1.527122799s
	I1226 22:16:31.732206  183749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-550844
	I1226 22:16:31.755108  183749 ssh_runner.go:195] Run: cat /version.json
	I1226 22:16:31.755158  183749 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 22:16:31.755172  183749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-550844
	I1226 22:16:31.755218  183749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-550844
	I1226 22:16:31.778424  183749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/running-upgrade-550844/id_rsa Username:docker}
	I1226 22:16:31.782991  183749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32955 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/running-upgrade-550844/id_rsa Username:docker}
	W1226 22:16:31.858273  183749 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1226 22:16:31.880983  183749 cache.go:157] /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1226 22:16:31.881022  183749 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.676493334s
	I1226 22:16:31.881040  183749 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1226 22:16:31.898147  183749 cache.go:157] /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1226 22:16:31.898173  183749 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.693731839s
	I1226 22:16:31.898184  183749 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1226 22:16:32.053840  183749 cache.go:157] /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1226 22:16:32.053879  183749 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 1.849498437s
	I1226 22:16:32.053891  183749 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1226 22:16:32.053905  183749 cache.go:87] Successfully saved all images to host disk.
	I1226 22:16:32.053944  183749 ssh_runner.go:195] Run: systemctl --version
	I1226 22:16:32.058351  183749 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1226 22:16:32.111738  183749 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 22:16:32.115751  183749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:16:32.132394  183749 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1226 22:16:32.132451  183749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:16:32.155983  183749 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1226 22:16:32.156015  183749 start.go:475] detecting cgroup driver to use...
	I1226 22:16:32.156050  183749 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 22:16:32.156117  183749 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 22:16:32.180250  183749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 22:16:32.189602  183749 docker.go:203] disabling cri-docker service (if available) ...
	I1226 22:16:32.189661  183749 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1226 22:16:32.199341  183749 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1226 22:16:32.208231  183749 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1226 22:16:32.218071  183749 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1226 22:16:32.218137  183749 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1226 22:16:32.302863  183749 docker.go:219] disabling docker service ...
	I1226 22:16:32.302930  183749 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1226 22:16:32.313175  183749 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1226 22:16:32.324063  183749 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1226 22:16:32.412650  183749 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1226 22:16:32.486425  183749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1226 22:16:32.496063  183749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 22:16:32.508304  183749 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1226 22:16:32.508359  183749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:16:32.519765  183749 out.go:177] 
	W1226 22:16:32.521284  183749 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1226 22:16:32.521305  183749 out.go:239] * 
	* 
	W1226 22:16:32.522409  183749 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1226 22:16:32.523804  183749 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-550844 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-26 22:16:32.542394638 +0000 UTC m=+1923.176630546
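The root cause is visible in the stderr above: the profile was created by the v1.9.0 binary (the guest reports Ubuntu 19.10), whose cri-o installation apparently has no /etc/crio/crio.conf.d/ drop-in directory, so the current binary's pause_image update ("sudo sed -i ... /etc/crio/crio.conf.d/02-crio.conf") fails with "No such file or directory" and start aborts with RUNTIME_ENABLE (exit status 90). A minimal sketch of a more defensive variant of that step, run inside the guest; this is illustrative only, not minikube's actual code, and the fallback to the legacy single-file config path is an assumption:

	# hypothetical fallback: edit the legacy /etc/crio/crio.conf when the drop-in file is absent
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"

Note the sed is a no-op if no pause_image line is present in the legacy file, so a real fix would also need to append the key (or write a fresh drop-in file) in that case.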
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-550844
helpers_test.go:235: (dbg) docker inspect running-upgrade-550844:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "627a2974cf3158ea6888497e2d89307e273081a4ae296fbd7e4bc023ec033d35",
	        "Created": "2023-12-26T22:15:17.718620841Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 168349,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-26T22:15:21.23518707Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/627a2974cf3158ea6888497e2d89307e273081a4ae296fbd7e4bc023ec033d35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/627a2974cf3158ea6888497e2d89307e273081a4ae296fbd7e4bc023ec033d35/hostname",
	        "HostsPath": "/var/lib/docker/containers/627a2974cf3158ea6888497e2d89307e273081a4ae296fbd7e4bc023ec033d35/hosts",
	        "LogPath": "/var/lib/docker/containers/627a2974cf3158ea6888497e2d89307e273081a4ae296fbd7e4bc023ec033d35/627a2974cf3158ea6888497e2d89307e273081a4ae296fbd7e4bc023ec033d35-json.log",
	        "Name": "/running-upgrade-550844",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-550844:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/72af1f8efe50e0fac655bea4647ca89ed28fee8b654967ef1aabcdbb8ca57d5b-init/diff:/var/lib/docker/overlay2/09a3138857cb3050b6e44c877da6ac7bc83afbcd1ca370dd45d81a9142bbb79f/diff:/var/lib/docker/overlay2/92f95b251adde7fc52f67aaa29f346bfddd6b3a2afa2c194ece128beaa54dab6/diff:/var/lib/docker/overlay2/b76acc1c7019e4d6e91a1139d34e0461061884debc2eacc05dbf7dbdb11f0004/diff:/var/lib/docker/overlay2/a5819660b4089408ffcabdc934014359058ca505fd51ad64b765ddeeed79106e/diff:/var/lib/docker/overlay2/bacef93ca80aff0a2a96c3337e8a987ada6b8d1670e5b1c413a9ec6dc2477aac/diff:/var/lib/docker/overlay2/3894cb447ea1b3bb1b89fceddff47e52b3a951aeedf6b33e35090aae5452e205/diff:/var/lib/docker/overlay2/379a132e845ff92a89823b6d0129f376cf221c787aadb7485d55168d1b98cdbf/diff:/var/lib/docker/overlay2/c83135cfdc9c76980f0f1ec8e34ffe1bcc1ccb69577fcccf257d923613814f25/diff:/var/lib/docker/overlay2/4cf75c5e9710d25abbde42c88d262958a304b81bf53d70299ee0dd1f614f2672/diff:/var/lib/docker/overlay2/d38584
d46a8ce5e94a43e3dd5f82bfe575f50c1411aa7741eb6dfa99efefd325/diff:/var/lib/docker/overlay2/01e6a6949d46d9369de5fc9b8bf6ac6da96843328209fef98d6c3c694d3ff299/diff:/var/lib/docker/overlay2/932c6a656f7e46f5b53d131a994d78b2098d3ebe34b3c386346bb080787f2551/diff:/var/lib/docker/overlay2/d997b72e27b1eb992c5e32c1628638a85567827db037812a38e668be4538af1a/diff:/var/lib/docker/overlay2/89ed6a8fdcde19b18278e6daace799774c91c267eaf01a46802cbd541805ec9c/diff:/var/lib/docker/overlay2/ae1a12479ddb2faf438706ca6bba1130e33efc4981427ae139ae819a40a716d9/diff:/var/lib/docker/overlay2/e7d07e40dfccf1144cbf90b43b53e569388267067d1efcde4954c7694a996257/diff:/var/lib/docker/overlay2/0f1aa55c7726c649e4b989ecfbc98b25b1b8b3b6ee6a3f18ad13de63ada6302a/diff:/var/lib/docker/overlay2/04f8cd5164f29e8b12917eecee4f98a22f952261bf4bf187fbe6dd4791043bc1/diff:/var/lib/docker/overlay2/c59983ca7347938174a62a902d377b39361e2079422eb302e8927591291ce285/diff:/var/lib/docker/overlay2/5e115d67d58d3673d6700128352141450b6efee5c293fb5cef867131dcb239de/diff:/var/lib/d
ocker/overlay2/598b66ada1a7603495210d46bfd4ef650481c3460e9173384a6129e7598ba11d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/72af1f8efe50e0fac655bea4647ca89ed28fee8b654967ef1aabcdbb8ca57d5b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/72af1f8efe50e0fac655bea4647ca89ed28fee8b654967ef1aabcdbb8ca57d5b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/72af1f8efe50e0fac655bea4647ca89ed28fee8b654967ef1aabcdbb8ca57d5b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-550844",
	                "Source": "/var/lib/docker/volumes/running-upgrade-550844/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-550844",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-550844",
	                "name.minikube.sigs.k8s.io": "running-upgrade-550844",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4c94a484acff2ca859ca9a07f86e6c51871558da2df1d75eb2696b5688806c81",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32955"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32954"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32953"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4c94a484acff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "9db92ffd6bbc36351cde3d6ec311f79741a36b1264f52ab15158394180afa4b7",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "5f4814473c6c41b8af703c4f2bef56228ded1f6d7f5a5b91e9098de2d925374b",
	                    "EndpointID": "9db92ffd6bbc36351cde3d6ec311f79741a36b1264f52ab15158394180afa4b7",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
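Note: the full `docker inspect` JSON captured above can be narrowed to just the fields a post-mortem usually needs by passing a Go template via `--format`; a minimal sketch (the container name is taken from this run, and the template expressions are standard docker CLI usage, not something the test harness executed):
	# Container state and PID only, instead of the full JSON dump:
	docker inspect --format '{{.State.Status}} (pid {{.State.Pid}})' running-upgrade-550844
	# Published host ports, rendered as JSON:
	docker inspect --format '{{json .NetworkSettings.Ports}}' running-upgrade-550844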
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-550844 -n running-upgrade-550844
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-550844 -n running-upgrade-550844: exit status 4 (311.942371ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 22:16:32.838110  184438 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-550844" does not appear in /home/jenkins/minikube-integration/17857-7214/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-550844" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-550844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-550844
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-550844: (5.317240517s)
--- FAIL: TestRunningBinaryUpgrade (94.34s)
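The exit status 4 above comes from a stale kubeconfig: the status command could not extract an endpoint because the "running-upgrade-550844" context was missing from the kubeconfig, exactly as the E1226 line and the "stale minikube-vm" warning indicate. A minimal check/repair sketch under those assumptions (profile name taken from this run; these are standard kubectl/minikube commands, not part of the harness):
	# Does the profile's context exist in the active kubeconfig?
	kubectl config get-contexts running-upgrade-550844 || echo "context missing from kubeconfig"
	# Rewrite the kubeconfig endpoint for the profile, as the warning suggests:
	minikube update-context -p running-upgrade-550844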

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (65.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.2867199242.exe start -p stopped-upgrade-845381 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.2867199242.exe start -p stopped-upgrade-845381 --memory=2200 --vm-driver=docker  --container-runtime=crio: (58.939666999s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.2867199242.exe -p stopped-upgrade-845381 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.2867199242.exe -p stopped-upgrade-845381 stop: (1.036599207s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-845381 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-845381 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.908567347s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-845381] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-845381 in cluster stopped-upgrade-845381
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Restarting existing docker container for "stopped-upgrade-845381" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1226 22:17:38.574498  194651 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:17:38.574663  194651 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:17:38.574675  194651 out.go:309] Setting ErrFile to fd 2...
	I1226 22:17:38.574682  194651 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:17:38.574895  194651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
	I1226 22:17:38.575447  194651 out.go:303] Setting JSON to false
	I1226 22:17:38.576828  194651 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3609,"bootTime":1703625450,"procs":482,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1226 22:17:38.576888  194651 start.go:138] virtualization: kvm guest
	I1226 22:17:38.579186  194651 out.go:177] * [stopped-upgrade-845381] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1226 22:17:38.581127  194651 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:17:38.582423  194651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:17:38.581154  194651 notify.go:220] Checking for updates...
	I1226 22:17:38.584791  194651 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 22:17:38.586007  194651 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	I1226 22:17:38.587266  194651 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1226 22:17:38.588553  194651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:17:38.590815  194651 config.go:182] Loaded profile config "stopped-upgrade-845381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1226 22:17:38.590837  194651 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I1226 22:17:38.592596  194651 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1226 22:17:38.593905  194651 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 22:17:38.620877  194651 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 22:17:38.621011  194651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:17:38.680313  194651 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:true NGoroutines:74 SystemTime:2023-12-26 22:17:38.670056427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<ni
l> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 22:17:38.680410  194651 docker.go:295] overlay module found
	I1226 22:17:38.682152  194651 out.go:177] * Using the docker driver based on existing profile
	I1226 22:17:38.683675  194651 start.go:298] selected driver: docker
	I1226 22:17:38.683700  194651 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-845381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-845381 Namespace: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1226 22:17:38.683813  194651 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 22:17:38.684869  194651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:17:38.759750  194651 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:true NGoroutines:74 SystemTime:2023-12-26 22:17:38.751000255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<ni
l> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 22:17:38.760068  194651 cni.go:84] Creating CNI manager for ""
	I1226 22:17:38.760089  194651 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1226 22:17:38.760100  194651 start_flags.go:323] config:
	{Name:stopped-upgrade-845381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-845381 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cr
io CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 Auto
PauseInterval:0s GPUs:}
	I1226 22:17:38.762325  194651 out.go:177] * Starting control plane node stopped-upgrade-845381 in cluster stopped-upgrade-845381
	I1226 22:17:38.763759  194651 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 22:17:38.765311  194651 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 22:17:38.766541  194651 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1226 22:17:38.766572  194651 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 22:17:38.785292  194651 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 22:17:38.785320  194651 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	W1226 22:17:38.800249  194651 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1226 22:17:38.800447  194651 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/stopped-upgrade-845381/config.json ...
	I1226 22:17:38.800527  194651 cache.go:107] acquiring lock: {Name:mkd9273560581b15976f1deec9beca552f3b4850 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:17:38.800590  194651 cache.go:107] acquiring lock: {Name:mkc58723b06625c14a4ad5d7a5172e2e2de66afc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:17:38.800598  194651 cache.go:107] acquiring lock: {Name:mk9e66cf5da3625106b9643ed9c9f5759e5e0416 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:17:38.800604  194651 cache.go:107] acquiring lock: {Name:mk5024a46f7dc26cf5e5a158201f08c339a072c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:17:38.800552  194651 cache.go:107] acquiring lock: {Name:mk877b437db095d0b6f4031cc4042d3cb45899b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:17:38.800537  194651 cache.go:107] acquiring lock: {Name:mkcca32fa4e44ee8cab2c12b1ea2c5be9c926aaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:17:38.800621  194651 cache.go:107] acquiring lock: {Name:mk54a7f8936c275afec99d2f95d7dea2772be9df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:17:38.800658  194651 cache.go:115] /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1226 22:17:38.800673  194651 cache.go:115] /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1226 22:17:38.800674  194651 cache.go:115] /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1226 22:17:38.800670  194651 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 83.266µs
	I1226 22:17:38.800652  194651 cache.go:115] /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1226 22:17:38.800685  194651 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1226 22:17:38.800684  194651 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 86.482µs
	I1226 22:17:38.800676  194651 cache.go:115] /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1226 22:17:38.800694  194651 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1226 22:17:38.800692  194651 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 190.383µs
	I1226 22:17:38.800704  194651 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1226 22:17:38.800706  194651 cache.go:115] /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1226 22:17:38.800706  194651 cache.go:115] /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1226 22:17:38.800719  194651 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 120.086µs
	I1226 22:17:38.800700  194651 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 168.437µs
	I1226 22:17:38.800730  194651 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1226 22:17:38.800730  194651 cache.go:194] Successfully downloaded all kic artifacts
	I1226 22:17:38.800537  194651 cache.go:107] acquiring lock: {Name:mkc11a21c60abe811b1a21747a76f5577081844d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:17:38.800758  194651 start.go:365] acquiring machines lock for stopped-upgrade-845381: {Name:mk530f7e92964f978b57f0e7775b5440eb5fdaa0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:17:38.800735  194651 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1226 22:17:38.800726  194651 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 117.197µs
	I1226 22:17:38.800790  194651 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1226 22:17:38.800685  194651 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 141.85µs
	I1226 22:17:38.800799  194651 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1226 22:17:38.800865  194651 cache.go:115] /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1226 22:17:38.800868  194651 start.go:369] acquired machines lock for "stopped-upgrade-845381" in 89.573µs
	I1226 22:17:38.800882  194651 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 364.377µs
	I1226 22:17:38.800895  194651 start.go:96] Skipping create...Using existing machine configuration
	I1226 22:17:38.800915  194651 fix.go:54] fixHost starting: m01
	I1226 22:17:38.800896  194651 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1226 22:17:38.800938  194651 cache.go:87] Successfully saved all images to host disk.
	I1226 22:17:38.801198  194651 cli_runner.go:164] Run: docker container inspect stopped-upgrade-845381 --format={{.State.Status}}
	I1226 22:17:38.818619  194651 fix.go:102] recreateIfNeeded on stopped-upgrade-845381: state=Stopped err=<nil>
	W1226 22:17:38.818707  194651 fix.go:128] unexpected machine state, will restart: <nil>
	I1226 22:17:38.820893  194651 out.go:177] * Restarting existing docker container for "stopped-upgrade-845381" ...
	I1226 22:17:38.822371  194651 cli_runner.go:164] Run: docker start stopped-upgrade-845381
	I1226 22:17:39.063569  194651 cli_runner.go:164] Run: docker container inspect stopped-upgrade-845381 --format={{.State.Status}}
	I1226 22:17:39.082607  194651 kic.go:430] container "stopped-upgrade-845381" state is running.
	I1226 22:17:39.083083  194651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-845381
	I1226 22:17:39.104395  194651 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/stopped-upgrade-845381/config.json ...
	I1226 22:17:39.104586  194651 machine.go:88] provisioning docker machine ...
	I1226 22:17:39.104612  194651 ubuntu.go:169] provisioning hostname "stopped-upgrade-845381"
	I1226 22:17:39.104656  194651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-845381
	I1226 22:17:39.120476  194651 main.go:141] libmachine: Using SSH client type: native
	I1226 22:17:39.120822  194651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I1226 22:17:39.120840  194651 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-845381 && echo "stopped-upgrade-845381" | sudo tee /etc/hostname
	I1226 22:17:39.121506  194651 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56640->127.0.0.1:32979: read: connection reset by peer
	I1226 22:17:42.238675  194651 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-845381
	
	I1226 22:17:42.238747  194651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-845381
	I1226 22:17:42.255216  194651 main.go:141] libmachine: Using SSH client type: native
	I1226 22:17:42.255551  194651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I1226 22:17:42.255569  194651 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-845381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-845381/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-845381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 22:17:42.371177  194651 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 22:17:42.371205  194651 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-7214/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-7214/.minikube}
	I1226 22:17:42.371225  194651 ubuntu.go:177] setting up certificates
	I1226 22:17:42.371235  194651 provision.go:83] configureAuth start
	I1226 22:17:42.371279  194651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-845381
	I1226 22:17:42.394687  194651 provision.go:138] copyHostCerts
	I1226 22:17:42.394874  194651 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem, removing ...
	I1226 22:17:42.394892  194651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem
	I1226 22:17:42.394967  194651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/ca.pem (1082 bytes)
	I1226 22:17:42.395068  194651 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem, removing ...
	I1226 22:17:42.395075  194651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem
	I1226 22:17:42.395109  194651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/cert.pem (1123 bytes)
	I1226 22:17:42.395171  194651 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem, removing ...
	I1226 22:17:42.395176  194651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem
	I1226 22:17:42.395272  194651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-7214/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-7214/.minikube/key.pem (1679 bytes)
	I1226 22:17:42.395351  194651 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-845381 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-845381]
	I1226 22:17:42.668961  194651 provision.go:172] copyRemoteCerts
	I1226 22:17:42.669019  194651 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 22:17:42.669048  194651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-845381
	I1226 22:17:42.686703  194651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/stopped-upgrade-845381/id_rsa Username:docker}
	I1226 22:17:42.769930  194651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 22:17:42.787429  194651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1226 22:17:42.804624  194651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1226 22:17:42.821557  194651 provision.go:86] duration metric: configureAuth took 450.312998ms
	I1226 22:17:42.821580  194651 ubuntu.go:193] setting minikube options for container-runtime
	I1226 22:17:42.821732  194651 config.go:182] Loaded profile config "stopped-upgrade-845381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1226 22:17:42.821824  194651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-845381
	I1226 22:17:42.841556  194651 main.go:141] libmachine: Using SSH client type: native
	I1226 22:17:42.841924  194651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I1226 22:17:42.841945  194651 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 22:17:43.564215  194651 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 22:17:43.564246  194651 machine.go:91] provisioned docker machine in 4.459646263s
	I1226 22:17:43.564259  194651 start.go:300] post-start starting for "stopped-upgrade-845381" (driver="docker")
	I1226 22:17:43.564271  194651 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 22:17:43.564351  194651 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 22:17:43.564398  194651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-845381
	I1226 22:17:43.583580  194651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/stopped-upgrade-845381/id_rsa Username:docker}
	I1226 22:17:43.670015  194651 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 22:17:43.672761  194651 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 22:17:43.672812  194651 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 22:17:43.672828  194651 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 22:17:43.672841  194651 info.go:137] Remote host: Ubuntu 19.10
	I1226 22:17:43.672854  194651 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-7214/.minikube/addons for local assets ...
	I1226 22:17:43.672915  194651 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-7214/.minikube/files for local assets ...
	I1226 22:17:43.673027  194651 filesync.go:149] local asset: /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem -> 139762.pem in /etc/ssl/certs
	I1226 22:17:43.673153  194651 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 22:17:43.679879  194651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/ssl/certs/139762.pem --> /etc/ssl/certs/139762.pem (1708 bytes)
	I1226 22:17:43.696470  194651 start.go:303] post-start completed in 132.198364ms
	I1226 22:17:43.696538  194651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:17:43.696582  194651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-845381
	I1226 22:17:43.712725  194651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/stopped-upgrade-845381/id_rsa Username:docker}
	I1226 22:17:43.795032  194651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 22:17:43.798585  194651 fix.go:56] fixHost completed within 4.997664487s
	I1226 22:17:43.798606  194651 start.go:83] releasing machines lock for "stopped-upgrade-845381", held for 4.997722521s
	I1226 22:17:43.798683  194651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-845381
	I1226 22:17:43.816362  194651 ssh_runner.go:195] Run: cat /version.json
	I1226 22:17:43.816421  194651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-845381
	I1226 22:17:43.816457  194651 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 22:17:43.816516  194651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-845381
	I1226 22:17:43.833759  194651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/stopped-upgrade-845381/id_rsa Username:docker}
	I1226 22:17:43.833927  194651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/stopped-upgrade-845381/id_rsa Username:docker}
	W1226 22:17:43.926381  194651 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1226 22:17:43.926468  194651 ssh_runner.go:195] Run: systemctl --version
	I1226 22:17:43.961778  194651 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1226 22:17:44.040910  194651 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 22:17:44.045356  194651 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:17:44.062034  194651 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1226 22:17:44.062092  194651 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:17:44.087468  194651 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1226 22:17:44.087492  194651 start.go:475] detecting cgroup driver to use...
	I1226 22:17:44.087523  194651 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 22:17:44.087562  194651 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 22:17:44.108748  194651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 22:17:44.117944  194651 docker.go:203] disabling cri-docker service (if available) ...
	I1226 22:17:44.118015  194651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1226 22:17:44.126834  194651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1226 22:17:44.135878  194651 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1226 22:17:44.145079  194651 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1226 22:17:44.145130  194651 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1226 22:17:44.210861  194651 docker.go:219] disabling docker service ...
	I1226 22:17:44.210946  194651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1226 22:17:44.220569  194651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1226 22:17:44.229483  194651 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1226 22:17:44.304057  194651 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1226 22:17:44.372336  194651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1226 22:17:44.381214  194651 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 22:17:44.392999  194651 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1226 22:17:44.393062  194651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:17:44.407890  194651 out.go:177] 
	W1226 22:17:44.409581  194651 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1226 22:17:44.409597  194651 out.go:239] * 
	* 
	W1226 22:17:44.410362  194651 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1226 22:17:44.411937  194651 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-845381 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (65.89s)
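The upgrade path fails at RUNTIME_ENABLE because the new binary rewrites `pause_image` in `/etc/crio/crio.conf.d/02-crio.conf`, a drop-in file that does not exist in the old kicbase image the v1.9.0 binary provisioned (the container reports Ubuntu 19.10). A sketch of the failing step and a hedged workaround (the first command is verbatim from the log above; the mkdir/printf guard is illustrative only and assumes CRI-O reads `pause_image` from the `[crio.image]` table, which is its documented location — it is not the fix minikube itself applies):
	# The command that failed inside the container, as captured above:
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	# Workaround sketch: make sure the drop-in exists before editing it,
	# so images built before the drop-in was introduced do not trip the sed.
	sudo mkdir -p /etc/crio/crio.conf.d
	[ -f /etc/crio/crio.conf.d/02-crio.conf ] || printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.2"\n' | sudo tee /etc/crio/crio.conf.d/02-crio.conf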

                                                
                                    

Test pass (283/316)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.12
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.4/json-events 4.72
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.2/json-events 8.94
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.23
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
25 TestDownloadOnlyKic 1.35
26 TestBinaryMirror 0.8
27 TestOffline 83.21
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
32 TestAddons/Setup 137.47
34 TestAddons/parallel/Registry 15.52
36 TestAddons/parallel/InspektorGadget 10.71
37 TestAddons/parallel/MetricsServer 6.7
38 TestAddons/parallel/HelmTiller 9.48
40 TestAddons/parallel/CSI 97.65
41 TestAddons/parallel/Headlamp 14.02
42 TestAddons/parallel/CloudSpanner 5.6
43 TestAddons/parallel/LocalPath 10.58
44 TestAddons/parallel/NvidiaDevicePlugin 5.7
45 TestAddons/parallel/Yakd 6.01
48 TestAddons/serial/GCPAuth/Namespaces 0.12
49 TestAddons/StoppedEnableDisable 12.26
50 TestCertOptions 27.63
51 TestCertExpiration 223.91
53 TestForceSystemdFlag 30
54 TestForceSystemdEnv 38.32
56 TestKVMDriverInstallOrUpdate 3.2
60 TestErrorSpam/setup 24.47
61 TestErrorSpam/start 0.66
62 TestErrorSpam/status 0.93
63 TestErrorSpam/pause 1.57
64 TestErrorSpam/unpause 1.54
65 TestErrorSpam/stop 1.42
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 69.99
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 34.78
72 TestFunctional/serial/KubeContext 0.04
73 TestFunctional/serial/KubectlGetPods 0.07
76 TestFunctional/serial/CacheCmd/cache/add_remote 2.93
77 TestFunctional/serial/CacheCmd/cache/add_local 1.11
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
79 TestFunctional/serial/CacheCmd/cache/list 0.06
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
81 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
82 TestFunctional/serial/CacheCmd/cache/delete 0.12
83 TestFunctional/serial/MinikubeKubectlCmd 0.12
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
85 TestFunctional/serial/ExtraConfig 31.86
86 TestFunctional/serial/ComponentHealth 0.07
87 TestFunctional/serial/LogsCmd 1.32
88 TestFunctional/serial/LogsFileCmd 1.34
89 TestFunctional/serial/InvalidService 4.11
91 TestFunctional/parallel/ConfigCmd 0.46
92 TestFunctional/parallel/DashboardCmd 10.93
93 TestFunctional/parallel/DryRun 0.49
94 TestFunctional/parallel/InternationalLanguage 0.16
95 TestFunctional/parallel/StatusCmd 0.94
99 TestFunctional/parallel/ServiceCmdConnect 11.68
100 TestFunctional/parallel/AddonsCmd 0.16
101 TestFunctional/parallel/PersistentVolumeClaim 31.44
103 TestFunctional/parallel/SSHCmd 0.63
104 TestFunctional/parallel/CpCmd 1.84
105 TestFunctional/parallel/MySQL 23.88
106 TestFunctional/parallel/FileSync 0.35
107 TestFunctional/parallel/CertSync 1.87
111 TestFunctional/parallel/NodeLabels 0.11
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.68
115 TestFunctional/parallel/License 0.2
116 TestFunctional/parallel/Version/short 0.06
117 TestFunctional/parallel/Version/components 0.51
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.44
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
122 TestFunctional/parallel/ImageCommands/ImageBuild 2.24
123 TestFunctional/parallel/ImageCommands/Setup 0.97
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
130 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.74
131 TestFunctional/parallel/ProfileCmd/profile_list 0.57
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.61
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 18.37
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.75
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.5
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.05
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.73
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.2
147 TestFunctional/parallel/ServiceCmd/DeployApp 9.19
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.79
149 TestFunctional/parallel/MountCmd/any-port 5.56
150 TestFunctional/parallel/MountCmd/specific-port 1.56
151 TestFunctional/parallel/ServiceCmd/List 0.88
152 TestFunctional/parallel/ServiceCmd/JSONOutput 0.92
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.1
154 TestFunctional/parallel/ServiceCmd/HTTPS 0.6
155 TestFunctional/parallel/ServiceCmd/Format 0.68
156 TestFunctional/parallel/ServiceCmd/URL 0.58
157 TestFunctional/delete_addon-resizer_images 0.07
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
163 TestIngressAddonLegacy/StartLegacyK8sCluster 70.94
165 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.22
166 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.52
170 TestJSONOutput/start/Command 68.94
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.63
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.58
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 5.78
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.22
195 TestKicCustomNetwork/create_custom_network 32.76
196 TestKicCustomNetwork/use_default_bridge_network 27.64
197 TestKicExistingNetwork 23.35
198 TestKicCustomSubnet 26.77
199 TestKicStaticIP 27.83
200 TestMainNoArgs 0.07
201 TestMinikubeProfile 48.08
204 TestMountStart/serial/StartWithMountFirst 8.09
205 TestMountStart/serial/VerifyMountFirst 0.25
206 TestMountStart/serial/StartWithMountSecond 5.48
207 TestMountStart/serial/VerifyMountSecond 0.25
208 TestMountStart/serial/DeleteFirst 1.6
209 TestMountStart/serial/VerifyMountPostDelete 0.24
210 TestMountStart/serial/Stop 1.2
211 TestMountStart/serial/RestartStopped 6.92
212 TestMountStart/serial/VerifyMountPostStop 0.25
215 TestMultiNode/serial/FreshStart2Nodes 84.61
216 TestMultiNode/serial/DeployApp2Nodes 3.36
218 TestMultiNode/serial/AddNode 16.52
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.27
221 TestMultiNode/serial/CopyFile 8.91
222 TestMultiNode/serial/StopNode 2.08
223 TestMultiNode/serial/StartAfterStop 10.88
224 TestMultiNode/serial/RestartKeepsNodes 112.01
225 TestMultiNode/serial/DeleteNode 4.61
226 TestMultiNode/serial/StopMultiNode 23.77
227 TestMultiNode/serial/RestartMultiNode 77.78
228 TestMultiNode/serial/ValidateNameConflict 26.72
233 TestPreload 119.65
235 TestScheduledStopUnix 97.86
238 TestInsufficientStorage 10.41
241 TestKubernetesUpgrade 365.39
242 TestMissingContainerUpgrade 141.84
245 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
248 TestNoKubernetes/serial/StartWithK8s 31.95
253 TestNetworkPlugins/group/false 8.82
257 TestNoKubernetes/serial/StartWithStopK8s 6.43
258 TestNoKubernetes/serial/Start 6.93
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
260 TestNoKubernetes/serial/ProfileList 1.66
261 TestNoKubernetes/serial/Stop 1.29
262 TestNoKubernetes/serial/StartNoArgs 8.35
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
264 TestStoppedBinaryUpgrade/Setup 0.34
266 TestStoppedBinaryUpgrade/MinikubeLogs 0.55
275 TestPause/serial/Start 42.26
276 TestNetworkPlugins/group/auto/Start 72.29
277 TestPause/serial/SecondStartNoReconfiguration 39.18
278 TestNetworkPlugins/group/kindnet/Start 40.15
279 TestNetworkPlugins/group/auto/KubeletFlags 0.28
280 TestNetworkPlugins/group/auto/NetCatPod 9.19
281 TestPause/serial/Pause 0.67
282 TestPause/serial/VerifyStatus 0.3
283 TestPause/serial/Unpause 0.6
284 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
285 TestPause/serial/PauseAgain 0.77
286 TestPause/serial/DeletePaused 2.61
287 TestNetworkPlugins/group/auto/DNS 0.18
288 TestNetworkPlugins/group/auto/Localhost 0.14
289 TestNetworkPlugins/group/auto/HairPin 0.13
290 TestPause/serial/VerifyDeletedResources 0.64
291 TestNetworkPlugins/group/calico/Start 64.55
292 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
293 TestNetworkPlugins/group/kindnet/NetCatPod 12.21
294 TestNetworkPlugins/group/kindnet/DNS 0.17
295 TestNetworkPlugins/group/kindnet/Localhost 0.13
296 TestNetworkPlugins/group/kindnet/HairPin 0.16
297 TestNetworkPlugins/group/custom-flannel/Start 58.24
298 TestNetworkPlugins/group/enable-default-cni/Start 80.32
299 TestNetworkPlugins/group/calico/ControllerPod 6.01
300 TestNetworkPlugins/group/calico/KubeletFlags 0.28
301 TestNetworkPlugins/group/calico/NetCatPod 9.2
302 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
303 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.17
304 TestNetworkPlugins/group/calico/DNS 0.14
305 TestNetworkPlugins/group/calico/Localhost 0.13
306 TestNetworkPlugins/group/calico/HairPin 0.13
307 TestNetworkPlugins/group/custom-flannel/DNS 0.16
308 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
309 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
310 TestNetworkPlugins/group/flannel/Start 60
311 TestNetworkPlugins/group/bridge/Start 38.01
312 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
313 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.19
314 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
315 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
316 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
318 TestStartStop/group/old-k8s-version/serial/FirstStart 115.1
319 TestNetworkPlugins/group/bridge/KubeletFlags 0.55
320 TestNetworkPlugins/group/bridge/NetCatPod 9.94
322 TestStartStop/group/no-preload/serial/FirstStart 59.66
323 TestNetworkPlugins/group/bridge/DNS 0.15
324 TestNetworkPlugins/group/bridge/Localhost 0.16
325 TestNetworkPlugins/group/bridge/HairPin 0.16
326 TestNetworkPlugins/group/flannel/ControllerPod 6.01
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
328 TestNetworkPlugins/group/flannel/NetCatPod 9.21
329 TestNetworkPlugins/group/flannel/DNS 0.15
330 TestNetworkPlugins/group/flannel/Localhost 0.15
331 TestNetworkPlugins/group/flannel/HairPin 0.14
333 TestStartStop/group/embed-certs/serial/FirstStart 40.66
335 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 71.49
336 TestStartStop/group/no-preload/serial/DeployApp 8.27
337 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
338 TestStartStop/group/no-preload/serial/Stop 12.04
339 TestStartStop/group/embed-certs/serial/DeployApp 7.27
340 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
341 TestStartStop/group/no-preload/serial/SecondStart 340.49
342 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.92
343 TestStartStop/group/embed-certs/serial/Stop 11.94
344 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
345 TestStartStop/group/embed-certs/serial/SecondStart 342.6
346 TestStartStop/group/old-k8s-version/serial/DeployApp 7.4
347 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.81
348 TestStartStop/group/old-k8s-version/serial/Stop 12.13
349 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.26
350 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
351 TestStartStop/group/old-k8s-version/serial/SecondStart 426.8
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.04
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 14.62
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
355 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 336.49
356 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.01
357 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
358 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.01
359 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
360 TestStartStop/group/no-preload/serial/Pause 3.52
362 TestStartStop/group/newest-cni/serial/FirstStart 35.86
363 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
364 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
365 TestStartStop/group/embed-certs/serial/Pause 3.63
366 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 15.01
367 TestStartStop/group/newest-cni/serial/DeployApp 0
368 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
369 TestStartStop/group/newest-cni/serial/Stop 1.58
370 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.32
371 TestStartStop/group/newest-cni/serial/SecondStart 27.33
372 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
373 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
374 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.03
375 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
376 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
377 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
378 TestStartStop/group/newest-cni/serial/Pause 2.91
379 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
380 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
381 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
382 TestStartStop/group/old-k8s-version/serial/Pause 2.92
x
+
TestDownloadOnly/v1.16.0/json-events (8.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-551448 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-551448 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.121124614s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.12s)
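
The "(dbg) Run:" lines above show the pattern the harness uses throughout this report: shell out to the minikube binary, capture combined output, and time the call. A minimal Go sketch of that pattern, with runAndTime as a hypothetical helper rather than the harness's actual implementation:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runAndTime shells out the way the harness's "(dbg) Run:" lines do,
    // returning combined stdout/stderr and the elapsed wall-clock time.
    func runAndTime(bin string, args ...string) (string, time.Duration, error) {
        start := time.Now()
        out, err := exec.Command(bin, args...).CombinedOutput()
        return string(out), time.Since(start), err
    }

    func main() {
        out, elapsed, err := runAndTime("out/minikube-linux-amd64",
            "start", "-o=json", "--download-only", "-p", "download-only-551448",
            "--force", "--alsologtostderr", "--kubernetes-version=v1.16.0",
            "--container-runtime=crio", "--driver=docker")
        fmt.Printf("err=%v elapsed=%s\n%s", err, elapsed, out)
    }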

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-551448
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-551448: exit status 85 (81.768279ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-551448 | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |          |
	|         | -p download-only-551448        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 21:44:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 21:44:29.480105   13988 out.go:296] Setting OutFile to fd 1 ...
	I1226 21:44:29.480402   13988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:44:29.480411   13988 out.go:309] Setting ErrFile to fd 2...
	I1226 21:44:29.480416   13988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:44:29.480588   13988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
	W1226 21:44:29.480701   13988 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17857-7214/.minikube/config/config.json: open /home/jenkins/minikube-integration/17857-7214/.minikube/config/config.json: no such file or directory
	I1226 21:44:29.481390   13988 out.go:303] Setting JSON to true
	I1226 21:44:29.482429   13988 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1620,"bootTime":1703625450,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1226 21:44:29.482510   13988 start.go:138] virtualization: kvm guest
	I1226 21:44:29.485318   13988 out.go:97] [download-only-551448] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1226 21:44:29.487043   13988 out.go:169] MINIKUBE_LOCATION=17857
	W1226 21:44:29.485437   13988 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball: no such file or directory
	I1226 21:44:29.485520   13988 notify.go:220] Checking for updates...
	I1226 21:44:29.490216   13988 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 21:44:29.491911   13988 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 21:44:29.493798   13988 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	I1226 21:44:29.495457   13988 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1226 21:44:29.498329   13988 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1226 21:44:29.498621   13988 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 21:44:29.521233   13988 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 21:44:29.521312   13988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:44:29.900664   13988 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-12-26 21:44:29.891869364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 21:44:29.900772   13988 docker.go:295] overlay module found
	I1226 21:44:29.903022   13988 out.go:97] Using the docker driver based on user configuration
	I1226 21:44:29.903064   13988 start.go:298] selected driver: docker
	I1226 21:44:29.903072   13988 start.go:902] validating driver "docker" against <nil>
	I1226 21:44:29.903204   13988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:44:29.961835   13988 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-12-26 21:44:29.952532759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 21:44:29.962036   13988 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 21:44:29.962563   13988 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1226 21:44:29.962728   13988 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1226 21:44:29.964771   13988 out.go:169] Using Docker driver with root privileges
	I1226 21:44:29.966431   13988 cni.go:84] Creating CNI manager for ""
	I1226 21:44:29.966461   13988 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:44:29.966475   13988 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1226 21:44:29.966492   13988 start_flags.go:323] config:
	{Name:download-only-551448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-551448 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:44:29.968115   13988 out.go:97] Starting control plane node download-only-551448 in cluster download-only-551448
	I1226 21:44:29.968141   13988 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 21:44:29.969619   13988 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I1226 21:44:29.969651   13988 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1226 21:44:29.969782   13988 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 21:44:29.987927   13988 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I1226 21:44:29.988110   13988 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I1226 21:44:29.988192   13988 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I1226 21:44:30.002466   13988 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1226 21:44:30.002518   13988 cache.go:56] Caching tarball of preloaded images
	I1226 21:44:30.002753   13988 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1226 21:44:30.005141   13988 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1226 21:44:30.005176   13988 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1226 21:44:30.042225   13988 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1226 21:44:33.555790   13988 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I1226 21:44:33.773752   13988 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1226 21:44:33.773842   13988 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-551448"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
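
The "Last Start" log above walks through the preload download: preload.go fetches the tarball with a ?checksum=md5:... query parameter, saves it under the cache directory, and then verifies the checksum of the file on disk. A minimal sketch of that download-then-verify step, assuming a plain HTTP GET and an MD5 digest comparison (the real preload code is more involved):

    package main

    import (
        "crypto/md5"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // downloadWithMD5 fetches url into dst and compares the payload's MD5
    // against wantHex, mirroring the "getting checksum ... verifying
    // checksum" steps in the log above.
    func downloadWithMD5(url, dst, wantHex string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        f, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer f.Close()

        h := md5.New()
        // Stream the body to disk and into the hash in a single pass.
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }
        if got := fmt.Sprintf("%x", h.Sum(nil)); got != wantHex {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
        }
        return nil
    }

    func main() {
        err := downloadWithMD5(
            "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4",
            "preloaded.tar.lz4",
            "432b600409d778ea7a21214e83948570") // checksum taken from the download URL above
        fmt.Println("verify:", err)
    }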

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (4.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-551448 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-551448 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.718361146s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.72s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-551448
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-551448: exit status 85 (90.362702ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-551448 | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |          |
	|         | -p download-only-551448        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-551448 | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |          |
	|         | -p download-only-551448        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 21:44:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 21:44:37.687573   14143 out.go:296] Setting OutFile to fd 1 ...
	I1226 21:44:37.687666   14143 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:44:37.687673   14143 out.go:309] Setting ErrFile to fd 2...
	I1226 21:44:37.687678   14143 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:44:37.687867   14143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
	W1226 21:44:37.687979   14143 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17857-7214/.minikube/config/config.json: open /home/jenkins/minikube-integration/17857-7214/.minikube/config/config.json: no such file or directory
	I1226 21:44:37.688389   14143 out.go:303] Setting JSON to true
	I1226 21:44:37.689179   14143 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1628,"bootTime":1703625450,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1226 21:44:37.689238   14143 start.go:138] virtualization: kvm guest
	I1226 21:44:37.691264   14143 out.go:97] [download-only-551448] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1226 21:44:37.692841   14143 out.go:169] MINIKUBE_LOCATION=17857
	I1226 21:44:37.691429   14143 notify.go:220] Checking for updates...
	I1226 21:44:37.695575   14143 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 21:44:37.697242   14143 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 21:44:37.698706   14143 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	I1226 21:44:37.700153   14143 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-551448"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (8.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-551448 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-551448 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.942259233s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (8.94s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-551448
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-551448: exit status 85 (76.109541ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-551448 | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |          |
	|         | -p download-only-551448           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-551448 | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |          |
	|         | -p download-only-551448           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-551448 | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |          |
	|         | -p download-only-551448           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 21:44:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 21:44:42.495495   14286 out.go:296] Setting OutFile to fd 1 ...
	I1226 21:44:42.495712   14286 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:44:42.495728   14286 out.go:309] Setting ErrFile to fd 2...
	I1226 21:44:42.495733   14286 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:44:42.495986   14286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
	W1226 21:44:42.496130   14286 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17857-7214/.minikube/config/config.json: open /home/jenkins/minikube-integration/17857-7214/.minikube/config/config.json: no such file or directory
	I1226 21:44:42.496702   14286 out.go:303] Setting JSON to true
	I1226 21:44:42.497874   14286 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1633,"bootTime":1703625450,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1226 21:44:42.497968   14286 start.go:138] virtualization: kvm guest
	I1226 21:44:42.500574   14286 out.go:97] [download-only-551448] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1226 21:44:42.502331   14286 out.go:169] MINIKUBE_LOCATION=17857
	I1226 21:44:42.500745   14286 notify.go:220] Checking for updates...
	I1226 21:44:42.505774   14286 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 21:44:42.507471   14286 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 21:44:42.509027   14286 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	I1226 21:44:42.510599   14286 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1226 21:44:42.513919   14286 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1226 21:44:42.514505   14286 config.go:182] Loaded profile config "download-only-551448": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1226 21:44:42.514587   14286 start.go:810] api.Load failed for download-only-551448: filestore "download-only-551448": Docker machine "download-only-551448" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1226 21:44:42.514738   14286 driver.go:392] Setting default libvirt URI to qemu:///system
	W1226 21:44:42.514800   14286 start.go:810] api.Load failed for download-only-551448: filestore "download-only-551448": Docker machine "download-only-551448" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1226 21:44:42.538973   14286 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 21:44:42.539098   14286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:44:42.596662   14286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-26 21:44:42.587092458 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 21:44:42.596789   14286 docker.go:295] overlay module found
	I1226 21:44:42.598671   14286 out.go:97] Using the docker driver based on existing profile
	I1226 21:44:42.598715   14286 start.go:298] selected driver: docker
	I1226 21:44:42.598722   14286 start.go:902] validating driver "docker" against &{Name:download-only-551448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-551448 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:44:42.598898   14286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:44:42.658060   14286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-26 21:44:42.649715389 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 21:44:42.658966   14286 cni.go:84] Creating CNI manager for ""
	I1226 21:44:42.658996   14286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:44:42.659007   14286 start_flags.go:323] config:
	{Name:download-only-551448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-551448 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:44:42.661360   14286 out.go:97] Starting control plane node download-only-551448 in cluster download-only-551448
	I1226 21:44:42.661403   14286 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 21:44:42.662975   14286 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I1226 21:44:42.662999   14286 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1226 21:44:42.663139   14286 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 21:44:42.678445   14286 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I1226 21:44:42.678591   14286 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I1226 21:44:42.678610   14286 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I1226 21:44:42.678614   14286 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I1226 21:44:42.678626   14286 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I1226 21:44:42.704753   14286 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1226 21:44:42.704788   14286 cache.go:56] Caching tarball of preloaded images
	I1226 21:44:42.704985   14286 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1226 21:44:42.707092   14286 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1226 21:44:42.707122   14286 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I1226 21:44:42.740902   14286 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:2e182f4d7475b49e22eaf15ea22c281b -> /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1226 21:44:47.425143   14286 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I1226 21:44:47.425228   14286 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17857-7214/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-551448"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
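
In this third run the log shows the cache-hit path: image.go finds the kicbase tarball already in the local cache directory and skips the pull, so only the v1.29.0-rc.2 preload still has to be downloaded. A sketch of such a stat-before-fetch check (cachedOrDownload is a hypothetical helper; minikube's real cache layout differs):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cachedOrDownload returns the cached path when the artifact is already
    // present (the "exists in cache, skipping pull" case above) and only
    // calls fetch when it is missing.
    func cachedOrDownload(cacheDir, name string, fetch func(dst string) error) (string, error) {
        dst := filepath.Join(cacheDir, name)
        if _, err := os.Stat(dst); err == nil {
            return dst, nil // cache hit: skip the pull
        }
        if err := fetch(dst); err != nil {
            return "", err
        }
        return dst, nil
    }

    func main() {
        p, err := cachedOrDownload(os.TempDir(), "kicbase.tar",
            func(dst string) error { return os.WriteFile(dst, []byte("..."), 0o644) })
        fmt.Println(p, err)
    }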

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-551448
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.35s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-088475 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-088475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-088475
--- PASS: TestDownloadOnlyKic (1.35s)

                                                
                                    
x
+
TestBinaryMirror (0.8s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-515766 --alsologtostderr --binary-mirror http://127.0.0.1:38627 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-515766" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-515766
--- PASS: TestBinaryMirror (0.80s)

TestOffline (83.21s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-162428 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-162428 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m20.542396468s)
helpers_test.go:175: Cleaning up "offline-crio-162428" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-162428
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-162428: (2.66478287s)
--- PASS: TestOffline (83.21s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-989445
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-989445: exit status 85 (70.890069ms)

-- stdout --
	* Profile "addons-989445" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-989445"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-989445
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-989445: exit status 85 (72.326989ms)

-- stdout --
	* Profile "addons-989445" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-989445"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (137.47s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-989445 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-989445 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m17.470963993s)
--- PASS: TestAddons/Setup (137.47s)

TestAddons/parallel/Registry (15.52s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 15.695133ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5qthm" [25ca6ee7-e882-457e-890f-a491cac8bc8b] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005309583s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kwnsf" [19b23904-1fe8-4f7f-baac-ae30b2300171] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007221769s
addons_test.go:340: (dbg) Run:  kubectl --context addons-989445 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-989445 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-989445 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.984375337s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-989445 ip
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-989445 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-amd64 -p addons-989445 addons disable registry --alsologtostderr -v=1: (1.244703134s)
--- PASS: TestAddons/parallel/Registry (15.52s)

TestAddons/parallel/InspektorGadget (10.71s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8t67w" [d183f191-f3a8-4eb9-b0d5-50cfd24ff5e6] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004516753s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-989445
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-989445: (5.705492578s)
--- PASS: TestAddons/parallel/InspektorGadget (10.71s)

TestAddons/parallel/MetricsServer (6.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 15.056889ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-vrsz7" [738a78bc-e7c3-4b71-b308-ca3539f9358f] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004993201s
addons_test.go:415: (dbg) Run:  kubectl --context addons-989445 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-989445 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.70s)

TestAddons/parallel/HelmTiller (9.48s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.423937ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-mtt89" [a1183792-1eb5-4966-bdcc-c273012f4542] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005241953s
addons_test.go:473: (dbg) Run:  kubectl --context addons-989445 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-989445 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.896953134s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-989445 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.48s)

TestAddons/parallel/CSI (97.65s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 16.126642ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-989445 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989445 get pvc hpvc -o jsonpath={.status.phase} -n default
	... (the same poll repeated, 15 times in total) ...
2023/12/26 21:47:25 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989445 get pvc hpvc -o jsonpath={.status.phase} -n default
	... (the same poll repeated, 52 more times, until the wait succeeded) ...
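Each helpers_test.go:394 line above is one poll of the claim's phase. As a shell sketch of the same wait (the Bound target phase is an assumption from the standard PVC lifecycle; the real helper is Go code with its own interval and the 6m0s deadline):

	while [ "$(kubectl --context addons-989445 get pvc hpvc -o jsonpath='{.status.phase}' -n default)" != "Bound" ]; do
		sleep 2
	done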
addons_test.go:574: (dbg) Run:  kubectl --context addons-989445 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f4c3837f-71a6-494d-9b6b-ecc55c5573d0] Pending
helpers_test.go:344: "task-pv-pod" [f4c3837f-71a6-494d-9b6b-ecc55c5573d0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f4c3837f-71a6-494d-9b6b-ecc55c5573d0] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.006255349s
addons_test.go:584: (dbg) Run:  kubectl --context addons-989445 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-989445 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-989445 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-989445 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-989445 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-989445 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989445 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989445 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-989445 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7cc0c129-1dfa-4ca9-989e-7bbcc1377674] Pending
helpers_test.go:344: "task-pv-pod-restore" [7cc0c129-1dfa-4ca9-989e-7bbcc1377674] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7cc0c129-1dfa-4ca9-989e-7bbcc1377674] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003646771s
addons_test.go:626: (dbg) Run:  kubectl --context addons-989445 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-989445 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-989445 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-989445 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-989445 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.607859935s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-989445 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (97.65s)

TestAddons/parallel/Headlamp (14.02s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-989445 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-989445 --alsologtostderr -v=1: (1.010778041s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-f54vg" [ab0df752-d733-4b0a-8a34-4e64cb6dfd2f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-f54vg" [ab0df752-d733-4b0a-8a34-4e64cb6dfd2f] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.005803564s
--- PASS: TestAddons/parallel/Headlamp (14.02s)

TestAddons/parallel/CloudSpanner (5.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-flqn5" [8796a6ef-ea86-48f5-9506-9cd876e62f0b] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00447331s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-989445
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

TestAddons/parallel/LocalPath (10.58s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-989445 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-989445 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989445 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989445 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989445 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989445 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989445 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989445 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [bad745bd-3d8d-43a3-bd93-e68225a7de9b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [bad745bd-3d8d-43a3-bd93-e68225a7de9b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [bad745bd-3d8d-43a3-bd93-e68225a7de9b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004311963s
addons_test.go:891: (dbg) Run:  kubectl --context addons-989445 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-989445 ssh "cat /opt/local-path-provisioner/pvc-52463b29-232e-44d1-8a86-3781624e9cfa_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-989445 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-989445 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-989445 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.58s)

TestAddons/parallel/NvidiaDevicePlugin (5.7s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zx8pr" [51833bfc-697b-48b5-bb36-fd13682ccab0] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006766579s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-989445
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.70s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-wn7nr" [4f8b405e-24f3-40f3-9790-4d27ef9c0cc3] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003949529s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-989445 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-989445 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (12.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-989445
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-989445: (11.962770073s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-989445
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-989445
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-989445
--- PASS: TestAddons/StoppedEnableDisable (12.26s)

TestCertOptions (27.63s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-980849 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-980849 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.843675583s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-980849 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
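The openssl dump above is how the test confirms the extra --apiserver-ips and --apiserver-names from the start flags ended up in the serving certificate; filtering the same output for the SAN section makes that visible at a glance (a sketch, not a step the test runs):

	out/minikube-linux-amd64 -p cert-options-980849 ssh \
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
		| grep -A1 'Subject Alternative Name'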
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-980849 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-980849 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-980849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-980849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-980849: (1.960223305s)
--- PASS: TestCertOptions (27.63s)

TestCertExpiration (223.91s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-406722 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-406722 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.686542538s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-406722 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-406722 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.829007321s)
helpers_test.go:175: Cleaning up "cert-expiration-406722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-406722
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-406722: (2.389091696s)
--- PASS: TestCertExpiration (223.91s)

TestForceSystemdFlag (30s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-102944 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-102944 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.462213568s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-102944 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-102944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-102944
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-102944: (2.268839954s)
--- PASS: TestForceSystemdFlag (30.00s)

TestForceSystemdEnv (38.32s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-156510 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-156510 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.694970711s)
helpers_test.go:175: Cleaning up "force-systemd-env-156510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-156510
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-156510: (2.621995012s)
--- PASS: TestForceSystemdEnv (38.32s)

TestKVMDriverInstallOrUpdate (3.2s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.20s)

TestErrorSpam/setup (24.47s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-310986 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-310986 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-310986 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-310986 --driver=docker  --container-runtime=crio: (24.46926483s)
--- PASS: TestErrorSpam/setup (24.47s)

TestErrorSpam/start (0.66s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-310986 --log_dir /tmp/nospam-310986 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-310986 --log_dir /tmp/nospam-310986 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-310986 --log_dir /tmp/nospam-310986 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

TestErrorSpam/status (0.93s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-310986 --log_dir /tmp/nospam-310986 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-310986 --log_dir /tmp/nospam-310986 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-310986 --log_dir /tmp/nospam-310986 status
--- PASS: TestErrorSpam/status (0.93s)

TestErrorSpam/pause (1.57s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-310986 --log_dir /tmp/nospam-310986 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-310986 --log_dir /tmp/nospam-310986 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-310986 --log_dir /tmp/nospam-310986 pause
--- PASS: TestErrorSpam/pause (1.57s)

TestErrorSpam/unpause (1.54s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-310986 --log_dir /tmp/nospam-310986 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-310986 --log_dir /tmp/nospam-310986 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-310986 --log_dir /tmp/nospam-310986 unpause
--- PASS: TestErrorSpam/unpause (1.54s)

TestErrorSpam/stop (1.42s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-310986 --log_dir /tmp/nospam-310986 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-310986 --log_dir /tmp/nospam-310986 stop: (1.211771371s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-310986 --log_dir /tmp/nospam-310986 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-310986 --log_dir /tmp/nospam-310986 stop
--- PASS: TestErrorSpam/stop (1.42s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17857-7214/.minikube/files/etc/test/nested/copy/13976/hosts
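minikube syncs anything under .minikube/files/ into the node at the matching absolute path, so this file is expected to surface in the guest as /etc/test/nested/copy/13976/hosts. A manual spot check once the cluster is up (a sketch, not part of the test):

	out/minikube-linux-amd64 -p functional-131935 ssh "cat /etc/test/nested/copy/13976/hosts"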
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (69.99s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-131935 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1226 21:52:11.686096   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 21:52:11.691784   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 21:52:11.702006   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 21:52:11.722247   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 21:52:11.762527   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 21:52:11.842880   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 21:52:12.003337   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 21:52:12.323826   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 21:52:12.964850   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 21:52:14.245322   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 21:52:16.806760   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 21:52:21.927399   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-131935 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m9.992236031s)
--- PASS: TestFunctional/serial/StartWithProxy (69.99s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.78s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-131935 --alsologtostderr -v=8
E1226 21:52:32.168359   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 21:52:52.649095   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-131935 --alsologtostderr -v=8: (34.783550055s)
functional_test.go:659: soft start took 34.784279334s for "functional-131935" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.78s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-131935 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-131935 cache add registry.k8s.io/pause:3.3: (1.00426554s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.93s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-131935 /tmp/TestFunctionalserialCacheCmdcacheadd_local3676238376/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 cache add minikube-local-cache-test:functional-131935
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 cache delete minikube-local-cache-test:functional-131935
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-131935
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-131935 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (273.450264ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh sudo crictl inspecti registry.k8s.io/pause:latest
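The sequence above is the whole cache_reload contract: remove a cached image from the node, prove crictl inspecti now fails, run cache reload, prove the image is back. Condensed, using the same commands as the log:

	out/minikube-linux-amd64 -p functional-131935 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-131935 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit status 1: image gone
	out/minikube-linux-amd64 -p functional-131935 cache reload
	out/minikube-linux-amd64 -p functional-131935 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after reload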
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 kubectl -- --context functional-131935 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-131935 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (31.86s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-131935 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1226 21:53:33.609544   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-131935 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.855833815s)
functional_test.go:757: restart took 31.855955659s for "functional-131935" cluster.
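One way to confirm the --extra-config value actually reached the restarted apiserver is to read the static pod's command line (a sketch the test does not run; the component=kube-apiserver label is the kubeadm convention and an assumption here):

	kubectl --context functional-131935 -n kube-system get pods -l component=kube-apiserver \
		-o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep admission-plugins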
--- PASS: TestFunctional/serial/ExtraConfig (31.86s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-131935 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.32s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-131935 logs: (1.3183909s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

TestFunctional/serial/LogsFileCmd (1.34s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 logs --file /tmp/TestFunctionalserialLogsFileCmd2328386399/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-131935 logs --file /tmp/TestFunctionalserialLogsFileCmd2328386399/001/logs.txt: (1.341687595s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

TestFunctional/serial/InvalidService (4.11s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-131935 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-131935
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-131935: exit status 115 (329.250298ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30404 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-131935 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.11s)
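
Exit status 115 is minikube's SVC_UNREACHABLE code: the service exists but has no running pods behind it. A minimal sketch of asserting that exit code from Go, reusing the binary path and profile name from this log:

// invalidservice.go - run `minikube service` against a backendless
// service and assert the documented SVC_UNREACHABLE exit status.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-131935")
	err := cmd.Run()
	exitErr, ok := err.(*exec.ExitError)
	if !ok {
		log.Fatalf("expected an exit error, got: %v", err)
	}
	if code := exitErr.ExitCode(); code != 115 {
		log.Fatalf("expected exit status 115 (SVC_UNREACHABLE), got %d", code)
	}
	fmt.Println("got expected SVC_UNREACHABLE exit status 115")
}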

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-131935 config get cpus: exit status 14 (77.401438ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-131935 config get cpus: exit status 14 (80.04546ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
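
The pattern above is a set/get/unset round trip: `config get` on a key that is not set exits with status 14, and a freshly set value reads back verbatim. A minimal sketch, reusing the binary path and profile name from this log:

// configcmd.go - exercise the config set/get/unset round trip shown above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// run invokes the minikube binary from the report and returns trimmed
// combined output plus the exit code.
func run(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			return strings.TrimSpace(string(out)), exitErr.ExitCode()
		}
		log.Fatal(err)
	}
	return strings.TrimSpace(string(out)), 0
}

func main() {
	if _, code := run("-p", "functional-131935", "config", "get", "cpus"); code != 14 {
		log.Fatalf("expected exit 14 for an unset key, got %d", code)
	}
	run("-p", "functional-131935", "config", "set", "cpus", "2")
	if v, _ := run("-p", "functional-131935", "config", "get", "cpus"); v != "2" {
		log.Fatalf("expected cpus=2, got %q", v)
	}
	run("-p", "functional-131935", "config", "unset", "cpus")
	fmt.Println("config round trip matched the log above")
}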

TestFunctional/parallel/DashboardCmd (10.93s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-131935 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-131935 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 52422: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.93s)

TestFunctional/parallel/DryRun (0.49s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-131935 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-131935 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (161.003492ms)
-- stdout --
	* [functional-131935] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1226 21:54:18.486072   50806 out.go:296] Setting OutFile to fd 1 ...
	I1226 21:54:18.486220   50806 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:54:18.486231   50806 out.go:309] Setting ErrFile to fd 2...
	I1226 21:54:18.486235   50806 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:54:18.486471   50806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
	I1226 21:54:18.487047   50806 out.go:303] Setting JSON to false
	I1226 21:54:18.488537   50806 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2209,"bootTime":1703625450,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1226 21:54:18.488638   50806 start.go:138] virtualization: kvm guest
	I1226 21:54:18.491028   50806 out.go:177] * [functional-131935] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1226 21:54:18.492597   50806 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 21:54:18.492643   50806 notify.go:220] Checking for updates...
	I1226 21:54:18.494204   50806 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 21:54:18.495648   50806 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 21:54:18.497112   50806 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	I1226 21:54:18.498385   50806 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1226 21:54:18.499704   50806 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 21:54:18.501488   50806 config.go:182] Loaded profile config "functional-131935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 21:54:18.502144   50806 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 21:54:18.523605   50806 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 21:54:18.523701   50806 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:54:18.575769   50806 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-12-26 21:54:18.566962383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 21:54:18.576199   50806 docker.go:295] overlay module found
	I1226 21:54:18.578787   50806 out.go:177] * Using the docker driver based on existing profile
	I1226 21:54:18.580156   50806 start.go:298] selected driver: docker
	I1226 21:54:18.580167   50806 start.go:902] validating driver "docker" against &{Name:functional-131935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-131935 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:54:18.580248   50806 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 21:54:18.582363   50806 out.go:177] 
	W1226 21:54:18.583758   50806 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1226 21:54:18.585225   50806 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-131935 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.49s)
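
The point of --dry-run is that flag validation runs without mutating the cluster, so the undersized --memory request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal sketch of that assertion, reusing the flags from this log:

// dryrun.go - confirm that an undersized --memory request is rejected
// during --dry-run validation with the documented exit status.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-131935",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	err := cmd.Run()
	if exitErr, ok := err.(*exec.ExitError); !ok || exitErr.ExitCode() != 23 {
		log.Fatalf("expected exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), got: %v", err)
	}
	log.Println("dry run rejected the 250MB request as expected")
}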

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-131935 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-131935 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (161.876925ms)
-- stdout --
	* [functional-131935] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I1226 21:54:11.422488   48783 out.go:296] Setting OutFile to fd 1 ...
	I1226 21:54:11.422759   48783 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:54:11.422768   48783 out.go:309] Setting ErrFile to fd 2...
	I1226 21:54:11.422773   48783 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:54:11.423034   48783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
	I1226 21:54:11.423541   48783 out.go:303] Setting JSON to false
	I1226 21:54:11.424457   48783 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2202,"bootTime":1703625450,"procs":278,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1226 21:54:11.424514   48783 start.go:138] virtualization: kvm guest
	I1226 21:54:11.426912   48783 out.go:177] * [functional-131935] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1226 21:54:11.428436   48783 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 21:54:11.428486   48783 notify.go:220] Checking for updates...
	I1226 21:54:11.429871   48783 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 21:54:11.431553   48783 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 21:54:11.432953   48783 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	I1226 21:54:11.434304   48783 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1226 21:54:11.435691   48783 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 21:54:11.437758   48783 config.go:182] Loaded profile config "functional-131935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 21:54:11.438256   48783 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 21:54:11.461344   48783 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 21:54:11.461472   48783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:54:11.512693   48783 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-12-26 21:54:11.504543834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 21:54:11.512791   48783 docker.go:295] overlay module found
	I1226 21:54:11.514514   48783 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1226 21:54:11.515873   48783 start.go:298] selected driver: docker
	I1226 21:54:11.515888   48783 start.go:902] validating driver "docker" against &{Name:functional-131935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-131935 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:54:11.515992   48783 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 21:54:11.518357   48783 out.go:177] 
	W1226 21:54:11.519863   48783 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1226 21:54:11.521223   48783 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.94s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.94s)

TestFunctional/parallel/ServiceCmdConnect (11.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-131935 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-131935 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-bnxck" [7a0e465e-e39e-475a-91cc-a4f8463f373a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-bnxck" [7a0e465e-e39e-475a-91cc-a4f8463f373a] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003963004s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31262
functional_test.go:1674: http://192.168.49.2:31262: success! body:
Hostname: hello-node-connect-55497b8b78-bnxck
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31262
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.68s)
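
The flow above is: create a deployment, expose it as a NodePort service, resolve its URL with `minikube service --url`, and issue a plain HTTP GET. A minimal sketch of the last two steps (service name and profile come from this log):

// servicecmdconnect.go - resolve a NodePort URL via `minikube service
// --url` and fetch it, as the connectivity check above does.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-131935",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:31262
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %d\n%s", url, resp.StatusCode, body)
}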

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (31.44s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [78abdc7b-b831-47da-985f-ab80751ff606] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003459687s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-131935 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-131935 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-131935 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-131935 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [81b04aba-582e-452a-95f2-fe3afaabf505] Pending
helpers_test.go:344: "sp-pod" [81b04aba-582e-452a-95f2-fe3afaabf505] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [81b04aba-582e-452a-95f2-fe3afaabf505] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.004188632s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-131935 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-131935 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-131935 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2fc88e5d-7b22-4105-93a4-cf1544c2676f] Pending
helpers_test.go:344: "sp-pod" [2fc88e5d-7b22-4105-93a4-cf1544c2676f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2fc88e5d-7b22-4105-93a4-cf1544c2676f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004959766s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-131935 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.44s)
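
The persistence check works by writing a marker file onto the PVC-backed mount, deleting and recreating the pod, and confirming the file survived. A minimal sketch of those steps via kubectl (pod name, manifest path, and mount path mirror this log; the real test also waits for pod readiness between steps):

// pvc.go - write through the claim, recreate the pod, verify the file.
package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	full := append([]string{"--context", "functional-131935"}, args...)
	if out, err := exec.Command("kubectl", full...).CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// In the real test, a readiness wait sits here before the final check.
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount/foo")
	log.Println("file survived pod recreation")
}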

TestFunctional/parallel/SSHCmd (0.63s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

TestFunctional/parallel/CpCmd (1.84s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh -n functional-131935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 cp functional-131935:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd955640282/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh -n functional-131935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh -n functional-131935 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.84s)
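
The copy test is a round trip: `minikube cp` a local file into the node, read it back over `minikube ssh`, and compare. A minimal sketch, reusing the paths from this log:

// cpcmd.go - copy a file into the node and verify it byte-for-byte.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	mk := "out/minikube-linux-amd64"
	if out, err := exec.Command(mk, "-p", "functional-131935", "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("cp: %v\n%s", err, out)
	}
	got, err := exec.Command(mk, "-p", "functional-131935", "ssh",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("copied file does not match the source")
	}
	log.Println("cp round trip OK")
}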

TestFunctional/parallel/MySQL (23.88s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-131935 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-ppd6k" [27e379cd-7004-4b5b-aed8-3f512b80f769] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-ppd6k" [27e379cd-7004-4b5b-aed8-3f512b80f769] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.004479499s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-131935 exec mysql-859648c796-ppd6k -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-131935 exec mysql-859648c796-ppd6k -- mysql -ppassword -e "show databases;": exit status 1 (140.749991ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-131935 exec mysql-859648c796-ppd6k -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-131935 exec mysql-859648c796-ppd6k -- mysql -ppassword -e "show databases;": exit status 1 (131.185407ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-131935 exec mysql-859648c796-ppd6k -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.88s)
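
The two ERROR 2002 failures above are expected: the pod reports Running while mysqld is still initializing, so the test simply retries until the socket accepts connections. A minimal sketch of that retry loop (the deadline and poll interval are illustrative; the pod name comes from this log):

// mysqlretry.go - poll `show databases;` until mysqld is reachable.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for {
		out, err := exec.Command("kubectl", "--context", "functional-131935",
			"exec", "mysql-859648c796-ppd6k", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			log.Printf("mysql is up:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mysql never became reachable: %v\n%s", err, out)
		}
		time.Sleep(5 * time.Second) // ERROR 2002 just means "not up yet"
	}
}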

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/13976/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "sudo cat /etc/test/nested/copy/13976/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)
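
File sync places files from the local $MINIKUBE_HOME/files tree into the node's filesystem at the mirrored path. A minimal sketch of the verification half (the path, including the test-run PID 13976, comes from this log):

// filesync.go - read the synced file back out of the node over ssh.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-131935",
		"ssh", "sudo cat /etc/test/nested/copy/13976/hosts").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("synced file content: %s", out)
}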

TestFunctional/parallel/CertSync (1.87s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/13976.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "sudo cat /etc/ssl/certs/13976.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/13976.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "sudo cat /usr/share/ca-certificates/13976.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/139762.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "sudo cat /etc/ssl/certs/139762.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/139762.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "sudo cat /usr/share/ca-certificates/139762.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.87s)
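
Cert sync installs each certificate both under its .pem name and under an OpenSSL subject-hash name (the .0 files above). A minimal sketch checking that the paths from this log exist and are non-empty inside the node:

// certsync.go - verify synced certs are present under both names.
package main

import (
	"log"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/13976.pem",
		"/usr/share/ca-certificates/13976.pem",
		"/etc/ssl/certs/51391683.0", // subject-hash alias of the same cert
	}
	for _, p := range paths {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-131935",
			"ssh", "sudo cat "+p).Output()
		if err != nil || len(out) == 0 {
			log.Fatalf("missing or empty cert %s: %v", p, err)
		}
		log.Printf("found %s (%d bytes)", p, len(out))
	}
}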

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-131935 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-131935 ssh "sudo systemctl is-active docker": exit status 1 (319.436006ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-131935 ssh "sudo systemctl is-active containerd": exit status 1 (357.846926ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)
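
`systemctl is-active` exits non-zero (3) for a stopped unit while printing "inactive", so the two "failed" ssh commands above are the passing case: with crio as the runtime, docker and containerd must be inactive. A minimal sketch of that inverted assertion:

// nonactive.go - assert the non-selected runtimes report "inactive".
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		// A non-nil err is expected here: is-active exits 3 when inactive.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-131935",
			"ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		if err == nil || state != "inactive" {
			log.Fatalf("%s should be inactive, got state=%q err=%v", unit, state, err)
		}
		log.Printf("%s is inactive, as expected", unit)
	}
}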

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.51s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-131935 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-131935
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-131935 image ls --format short --alsologtostderr:
I1226 21:54:19.605346   51887 out.go:296] Setting OutFile to fd 1 ...
I1226 21:54:19.605464   51887 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 21:54:19.605472   51887 out.go:309] Setting ErrFile to fd 2...
I1226 21:54:19.605477   51887 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 21:54:19.605646   51887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
I1226 21:54:19.606216   51887 config.go:182] Loaded profile config "functional-131935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 21:54:19.606342   51887 config.go:182] Loaded profile config "functional-131935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 21:54:19.606981   51887 cli_runner.go:164] Run: docker container inspect functional-131935 --format={{.State.Status}}
I1226 21:54:19.626833   51887 ssh_runner.go:195] Run: systemctl --version
I1226 21:54:19.626877   51887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-131935
I1226 21:54:19.649138   51887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/functional-131935/id_rsa Username:docker}
I1226 21:54:19.744151   51887 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
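
Per the stderr trace above, `minikube image ls` is backed by `sudo crictl images --output json` inside the node. A minimal sketch asserting that an expected tag shows up in the short listing:

// imagels.go - check an expected image tag appears in the short listing.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-131935",
		"image", "ls", "--format", "short").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !strings.Contains(string(out), "registry.k8s.io/pause:latest") {
		log.Fatal("expected registry.k8s.io/pause:latest in the image list")
	}
	log.Println("image list contains the expected tag")
}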

TestFunctional/parallel/ImageCommands/ImageListTable (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-131935 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | alpine             | 529b5644c430c | 44.4MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/nginx                 | latest             | d453dd892d935 | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| gcr.io/google-containers/addon-resizer  | functional-131935  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-131935 image ls --format table --alsologtostderr:
I1226 21:54:20.323296   52244 out.go:296] Setting OutFile to fd 1 ...
I1226 21:54:20.323429   52244 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 21:54:20.323438   52244 out.go:309] Setting ErrFile to fd 2...
I1226 21:54:20.323444   52244 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 21:54:20.323616   52244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
I1226 21:54:20.324170   52244 config.go:182] Loaded profile config "functional-131935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 21:54:20.324266   52244 config.go:182] Loaded profile config "functional-131935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 21:54:20.324640   52244 cli_runner.go:164] Run: docker container inspect functional-131935 --format={{.State.Status}}
I1226 21:54:20.340170   52244 ssh_runner.go:195] Run: systemctl --version
I1226 21:54:20.340213   52244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-131935
I1226 21:54:20.357116   52244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/functional-131935/id_rsa Username:docker}
I1226 21:54:20.546356   52244 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.44s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-131935 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"12722
6832"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":["docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026","docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9"],"
repoTags":["docker.io/library/nginx:latest"],"size":"190867606"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{
"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-131935"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeff
add65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff","repoDigests":["docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686","docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44405005"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/c
oredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-131935 image ls --format json --alsologtostderr:
I1226 21:54:20.061747   52159 out.go:296] Setting OutFile to fd 1 ...
I1226 21:54:20.062037   52159 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 21:54:20.062050   52159 out.go:309] Setting ErrFile to fd 2...
I1226 21:54:20.062058   52159 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 21:54:20.062360   52159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
I1226 21:54:20.063205   52159 config.go:182] Loaded profile config "functional-131935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 21:54:20.063362   52159 config.go:182] Loaded profile config "functional-131935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 21:54:20.063960   52159 cli_runner.go:164] Run: docker container inspect functional-131935 --format={{.State.Status}}
I1226 21:54:20.084025   52159 ssh_runner.go:195] Run: systemctl --version
I1226 21:54:20.084080   52159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-131935
I1226 21:54:20.104874   52159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/functional-131935/id_rsa Username:docker}
I1226 21:54:20.191306   52159 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
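
For reference, a minimal sketch of consuming the JSON listing above outside the test harness (assumes jq is installed; the profile name is the one used in this run):

    # print every tag known to the container runtime, one per line
    out/minikube-linux-amd64 -p functional-131935 image ls --format json | jq -r '.[].repoTags[]?'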

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-131935 image ls --format yaml --alsologtostderr:
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests:
- docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
- docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests:
- docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "44405005"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-131935
size: "34114467"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-131935 image ls --format yaml --alsologtostderr:
I1226 21:54:19.808710   51968 out.go:296] Setting OutFile to fd 1 ...
I1226 21:54:19.808961   51968 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 21:54:19.808989   51968 out.go:309] Setting ErrFile to fd 2...
I1226 21:54:19.809005   51968 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 21:54:19.809305   51968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
I1226 21:54:19.810298   51968 config.go:182] Loaded profile config "functional-131935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 21:54:19.810500   51968 config.go:182] Loaded profile config "functional-131935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 21:54:19.811213   51968 cli_runner.go:164] Run: docker container inspect functional-131935 --format={{.State.Status}}
I1226 21:54:19.838319   51968 ssh_runner.go:195] Run: systemctl --version
I1226 21:54:19.838359   51968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-131935
I1226 21:54:19.858337   51968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/functional-131935/id_rsa Username:docker}
I1226 21:54:19.943388   51968 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-131935 ssh pgrep buildkitd: exit status 1 (315.29743ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image build -t localhost/my-image:functional-131935 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-131935 image build -t localhost/my-image:functional-131935 testdata/build --alsologtostderr: (1.71255163s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-131935 image build -t localhost/my-image:functional-131935 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d8cc79cbb08
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-131935
--> 31fde22c8aa
Successfully tagged localhost/my-image:functional-131935
31fde22c8aaa3c3164b308de3349e72116e67cc69647c614a14d4b2021f549ea
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-131935 image build -t localhost/my-image:functional-131935 testdata/build --alsologtostderr:
I1226 21:54:20.174507   52200 out.go:296] Setting OutFile to fd 1 ...
I1226 21:54:20.174819   52200 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 21:54:20.174830   52200 out.go:309] Setting ErrFile to fd 2...
I1226 21:54:20.174835   52200 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 21:54:20.175046   52200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
I1226 21:54:20.175680   52200 config.go:182] Loaded profile config "functional-131935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 21:54:20.176200   52200 config.go:182] Loaded profile config "functional-131935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 21:54:20.176660   52200 cli_runner.go:164] Run: docker container inspect functional-131935 --format={{.State.Status}}
I1226 21:54:20.196932   52200 ssh_runner.go:195] Run: systemctl --version
I1226 21:54:20.196987   52200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-131935
I1226 21:54:20.216685   52200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/functional-131935/id_rsa Username:docker}
I1226 21:54:20.365980   52200 build_images.go:151] Building image from path: /tmp/build.2626136571.tar
I1226 21:54:20.366038   52200 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1226 21:54:20.376580   52200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2626136571.tar
I1226 21:54:20.380239   52200 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2626136571.tar: stat -c "%s %y" /var/lib/minikube/build/build.2626136571.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2626136571.tar': No such file or directory
I1226 21:54:20.380270   52200 ssh_runner.go:362] scp /tmp/build.2626136571.tar --> /var/lib/minikube/build/build.2626136571.tar (3072 bytes)
I1226 21:54:20.475845   52200 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2626136571
I1226 21:54:20.486892   52200 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2626136571 -xf /var/lib/minikube/build/build.2626136571.tar
I1226 21:54:20.498928   52200 crio.go:297] Building image: /var/lib/minikube/build/build.2626136571
I1226 21:54:20.498993   52200 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-131935 /var/lib/minikube/build/build.2626136571 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1226 21:54:21.802544   52200 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-131935 /var/lib/minikube/build/build.2626136571 --cgroup-manager=cgroupfs: (1.303514922s)
I1226 21:54:21.802607   52200 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2626136571
I1226 21:54:21.810580   52200 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2626136571.tar
I1226 21:54:21.817543   52200 build_images.go:207] Built localhost/my-image:functional-131935 from /tmp/build.2626136571.tar
I1226 21:54:21.817565   52200 build_images.go:123] succeeded building to: functional-131935
I1226 21:54:21.817569   52200 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image ls
2023/12/26 21:54:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.24s)
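
Note on the flow above: pgrep buildkitd exits non-zero because the crio runtime ships no buildkit daemon, so minikube packs testdata/build into a tarball, copies it to /var/lib/minikube/build, and builds it with podman inside the node (the three logged steps correspond to a FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt Dockerfile). A minimal way to reproduce the same build by hand:

    out/minikube-linux-amd64 -p functional-131935 image build -t localhost/my-image:functional-131935 testdata/build
    out/minikube-linux-amd64 -p functional-131935 image ls | grep my-image   # confirm the tag landed in the runtime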

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-131935
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.97s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-131935 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-131935 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-131935 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-131935 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 45970: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "486.593802ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "82.51483ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "533.918349ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "79.644869ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)
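
A sketch of reading the profile JSON programmatically (assumes jq; the valid/invalid grouping and the Name field reflect minikube's profile-list schema as I understand it, not something this log asserts):

    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'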

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-131935 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-131935 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [26d421dc-c034-4706-86d8-9c4b826c5781] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [26d421dc-c034-4706-86d8-9c4b826c5781] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 18.003936807s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image load --daemon gcr.io/google-containers/addon-resizer:functional-131935 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-131935 image load --daemon gcr.io/google-containers/addon-resizer:functional-131935 --alsologtostderr: (4.526914067s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.75s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-131935
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image load --daemon gcr.io/google-containers/addon-resizer:functional-131935 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-131935 image load --daemon gcr.io/google-containers/addon-resizer:functional-131935 --alsologtostderr: (5.573409564s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.50s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-131935 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.141.8 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
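
Putting the tunnel subtests together, a minimal end-to-end sketch (the service name comes from testdata/testsvc.yaml above; the jsonpath invocation is copied from the IngressIP step):

    out/minikube-linux-amd64 -p functional-131935 tunnel --alsologtostderr &   # keep running in the background
    IP=$(kubectl --context functional-131935 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl "http://$IP/"   # 10.99.141.8 in this run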

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-131935 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image save gcr.io/google-containers/addon-resizer:functional-131935 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-131935 image save gcr.io/google-containers/addon-resizer:functional-131935 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.05489539s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image rm gcr.io/google-containers/addon-resizer:functional-131935 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-131935 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.986633918s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.20s)
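
The save/remove/load subtests above amount to a round trip through a tarball; a condensed sketch (the /tmp path is illustrative, the run itself used the Jenkins workspace):

    out/minikube-linux-amd64 -p functional-131935 image save gcr.io/google-containers/addon-resizer:functional-131935 /tmp/addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-131935 image rm gcr.io/google-containers/addon-resizer:functional-131935
    out/minikube-linux-amd64 -p functional-131935 image load /tmp/addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-131935 image ls   # the tag should be back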

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-131935 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-131935 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-g2pbr" [b8ddf64b-c271-4bb7-8724-f9213264c16b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-g2pbr" [b8ddf64b-c271-4bb7-8724-f9213264c16b] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003703889s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)
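
The same deploy-and-expose flow, runnable as-is against the test profile (app=hello-node is the label kubectl create deployment attaches):

    kubectl --context functional-131935 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-131935 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-131935 get pods -l app=hello-node   # wait for Running, as the test does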

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-131935
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 image save --daemon gcr.io/google-containers/addon-resizer:functional-131935 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-131935
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.79s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (5.56s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-131935 /tmp/TestFunctionalparallelMountCmdany-port3466835855/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1703627651528200660" to /tmp/TestFunctionalparallelMountCmdany-port3466835855/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1703627651528200660" to /tmp/TestFunctionalparallelMountCmdany-port3466835855/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1703627651528200660" to /tmp/TestFunctionalparallelMountCmdany-port3466835855/001/test-1703627651528200660
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-131935 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (264.962357ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 26 21:54 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 26 21:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 26 21:54 test-1703627651528200660
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh cat /mount-9p/test-1703627651528200660
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-131935 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fef8d724-d0b7-4353-a93e-0e43da3debc2] Pending
helpers_test.go:344: "busybox-mount" [fef8d724-d0b7-4353-a93e-0e43da3debc2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [fef8d724-d0b7-4353-a93e-0e43da3debc2] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fef8d724-d0b7-4353-a93e-0e43da3debc2] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.004072536s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-131935 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-131935 /tmp/TestFunctionalparallelMountCmdany-port3466835855/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.56s)
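
A minimal sketch of the 9p mount flow exercised here (the host path is illustrative; the findmnt and ls invocations are the same ones the test runs):

    out/minikube-linux-amd64 mount -p functional-131935 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &   # keep running
    out/minikube-linux-amd64 -p functional-131935 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-131935 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-131935 ssh "sudo umount -f /mount-9p"   # cleanup, mirroring the test teardown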

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.56s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-131935 /tmp/TestFunctionalparallelMountCmdspecific-port2999536466/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-131935 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (271.232008ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-131935 /tmp/TestFunctionalparallelMountCmdspecific-port2999536466/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-131935 ssh "sudo umount -f /mount-9p": exit status 1 (283.490763ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-131935 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-131935 /tmp/TestFunctionalparallelMountCmdspecific-port2999536466/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.88s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.88s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.92s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 service list -o json
functional_test.go:1493: Took "916.478172ms" to run "out/minikube-linux-amd64 -p functional-131935 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.92s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.1s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-131935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665896633/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-131935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665896633/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-131935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665896633/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-131935 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-131935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665896633/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-131935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665896633/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-131935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665896633/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31807
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-131935 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31807
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.58s)
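
Combined, the ServiceCmd URL subtests reduce to fetching a NodePort endpoint; a short sketch:

    URL=$(out/minikube-linux-amd64 -p functional-131935 service hello-node --url)
    curl "$URL"   # http://192.168.49.2:31807 in this run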

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-131935
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-131935
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-131935
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (70.94s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-038954 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1226 21:54:55.530036   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-038954 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m10.938222306s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (70.94s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.22s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-038954 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-038954 addons enable ingress --alsologtostderr -v=5: (10.218132129s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.22s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-038954 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

                                                
                                    
TestJSONOutput/start/Command (68.94s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-499594 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1226 21:59:04.930804   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
E1226 21:59:25.411851   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
E1226 22:00:06.372835   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-499594 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m8.9395585s)
--- PASS: TestJSONOutput/start/Command (68.94s)
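
With --output=json, minikube emits one JSON event per line on stdout; a sketch of filtering the step events (the io.k8s.sigs.minikube.step type and data.message field reflect minikube's CloudEvent schema as I understand it, not something this log asserts):

    out/minikube-linux-amd64 start -p json-output-499594 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'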

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.63s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-499594 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.58s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-499594 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.78s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-499594 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-499594 --output=json --user=testUser: (5.778587888s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-361010 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-361010 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.943709ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cbc072af-aa07-4265-921c-868d57d40a3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-361010] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed1ec5e0-933c-4ded-ae8b-3e1eeb2c6680","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17857"}}
	{"specversion":"1.0","id":"c235c3bd-ae00-4b5c-800d-16ccd829829f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6969405e-7eef-41d9-b185-ca173ed63320","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig"}}
	{"specversion":"1.0","id":"4dccb6fb-c226-4ced-a2f2-a325cf3a201f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube"}}
	{"specversion":"1.0","id":"55b83a03-40a2-494b-a788-9b9df1e44d81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"080ef4e4-962b-4df8-bacc-1e4b73b7aac4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a043c621-b8fa-4be2-87ba-2446a1f16efe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-361010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-361010
--- PASS: TestErrorJSONOutput (0.22s)
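The stdout block above shows the CloudEvents-style envelope minikube prints with --output=json (specversion, id, source, type, data). As a rough illustration only, not the test's own helper, the following Go sketch decodes the error event quoted verbatim from this run and pulls out the failure name and exit code; everything beyond that quoted line is assumed.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// cloudEvent mirrors the envelope fields visible in the stdout above.
	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// The error event, copied verbatim from the report.
		line := `{"specversion":"1.0","id":"a043c621-b8fa-4be2-87ba-2446a1f16efe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["name"], "exit code", ev.Data["exitcode"])
	}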

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (32.76s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-738588 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-738588 --network=: (30.748386815s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-738588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-738588
E1226 22:00:55.728981   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
E1226 22:00:55.734243   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
E1226 22:00:55.744501   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
E1226 22:00:55.764743   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
E1226 22:00:55.805017   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
E1226 22:00:55.885336   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
E1226 22:00:56.045702   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
E1226 22:00:56.366266   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-738588: (1.997160524s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.76s)
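The check behind kic_custom_network_test.go:150 is a listing of network names via `docker network ls --format {{.Name}}`. Below is a minimal Go sketch of that kind of existence check, assuming a simple line-by-line match on the network name created in this run; the repo's actual helper is not shown in the log.

	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
	)

	// networkExists runs the same listing command the test uses and scans for name.
	func networkExists(name string) (bool, error) {
		out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			return false, err
		}
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			if sc.Text() == name {
				return true, nil
			}
		}
		return false, sc.Err()
	}

	func main() {
		ok, err := networkExists("docker-network-738588") // network name from this run
		fmt.Println(ok, err)
	}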

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (27.64s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-604051 --network=bridge
E1226 22:00:57.007440   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
E1226 22:00:58.287652   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
E1226 22:01:00.848300   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
E1226 22:01:05.968907   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
E1226 22:01:16.209161   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-604051 --network=bridge: (25.703784564s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-604051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-604051
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-604051: (1.920412023s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.64s)

                                                
                                    
x
+
TestKicExistingNetwork (23.35s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-512625 --network=existing-network
E1226 22:01:28.293123   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
E1226 22:01:36.690268   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-512625 --network=existing-network: (21.306966536s)
helpers_test.go:175: Cleaning up "existing-network-512625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-512625
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-512625: (1.916747539s)
--- PASS: TestKicExistingNetwork (23.35s)

                                                
                                    
x
+
TestKicCustomSubnet (26.77s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-521872 --subnet=192.168.60.0/24
E1226 22:02:11.685773   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-521872 --subnet=192.168.60.0/24: (24.657456579s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-521872 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-521872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-521872
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-521872: (2.0911396s)
--- PASS: TestKicCustomSubnet (26.77s)
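Here the cluster is started with --subnet=192.168.60.0/24 and the subnet is read back with the `docker network inspect ... --format "{{(index .IPAM.Config 0).Subnet}}"` call shown above. A hedged Go sketch of how that comparison could be made; comparing parsed CIDRs rather than raw strings is an assumption, not necessarily what the test does.

	package main

	import (
		"fmt"
		"net"
		"os/exec"
		"strings"
	)

	func main() {
		want := "192.168.60.0/24" // value passed via --subnet in this run

		// Same inspect invocation the test runs; the Go template prints only the subnet.
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-521872",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		got := strings.TrimSpace(string(out))

		// Compare as CIDRs so trivial formatting differences do not matter.
		_, wantNet, _ := net.ParseCIDR(want)
		_, gotNet, err := net.ParseCIDR(got)
		if err != nil || gotNet.String() != wantNet.String() {
			panic(fmt.Sprintf("subnet mismatch: got %q, want %q", got, want))
		}
		fmt.Println("subnet matches:", got)
	}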

                                                
                                    
x
+
TestKicStaticIP (27.83s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-166733 --static-ip=192.168.200.200
E1226 22:02:17.651118   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-166733 --static-ip=192.168.200.200: (25.640223027s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-166733 ip
helpers_test.go:175: Cleaning up "static-ip-166733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-166733
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-166733: (2.059110785s)
--- PASS: TestKicStaticIP (27.83s)

                                                
                                    
x
+
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
x
+
TestMinikubeProfile (48.08s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-235102 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-235102 --driver=docker  --container-runtime=crio: (20.93796903s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-237418 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-237418 --driver=docker  --container-runtime=crio: (22.067133737s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-235102
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-237418
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-237418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-237418
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-237418: (1.847324607s)
helpers_test.go:175: Cleaning up "first-235102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-235102
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-235102: (2.253913802s)
--- PASS: TestMinikubeProfile (48.08s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.09s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-904576 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-904576 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.08680985s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.09s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-904576 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (5.48s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-918870 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1226 22:03:39.571631   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-918870 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.476174521s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.48s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-918870 ssh -- ls /minikube-host
E1226 22:03:44.447833   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.6s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-904576 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-904576 --alsologtostderr -v=5: (1.598097786s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-918870 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-918870
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-918870: (1.203970983s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (6.92s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-918870
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-918870: (5.922204235s)
--- PASS: TestMountStart/serial/RestartStopped (6.92s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-918870 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (84.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-266826 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1226 22:04:12.133300   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-266826 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m24.160393081s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (84.61s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (3.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266826 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266826 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-266826 -- rollout status deployment/busybox: (1.723618909s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266826 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266826 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266826 -- exec busybox-5bc68d56bd-25lpb -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266826 -- exec busybox-5bc68d56bd-8vrwf -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266826 -- exec busybox-5bc68d56bd-25lpb -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266826 -- exec busybox-5bc68d56bd-8vrwf -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266826 -- exec busybox-5bc68d56bd-25lpb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266826 -- exec busybox-5bc68d56bd-8vrwf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.36s)
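Each DNS step above is a `kubectl exec <pod> -- nslookup <name>` run through the minikube kubectl wrapper. The Go sketch below reproduces one round of those lookups using plain kubectl with --context instead of the wrapper; the pod names and profile come from this run, and the helper shape itself is illustrative.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// lookupInPod mirrors the exec+nslookup pattern the test runs for each pod and name.
	func lookupInPod(context, pod, name string) error {
		cmd := exec.Command("kubectl", "--context", context, "exec", pod, "--", "nslookup", name)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("nslookup %s in %s failed: %v\n%s", name, pod, err, out)
		}
		return nil
	}

	func main() {
		pods := []string{"busybox-5bc68d56bd-25lpb", "busybox-5bc68d56bd-8vrwf"}
		names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, p := range pods {
			for _, n := range names {
				if err := lookupInPod("multinode-266826", p, n); err != nil {
					fmt.Println(err)
				}
			}
		}
	}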

                                                
                                    
x
+
TestMultiNode/serial/AddNode (16.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-266826 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-266826 -v 3 --alsologtostderr: (15.944778695s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.52s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-266826 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (8.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 cp testdata/cp-test.txt multinode-266826:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 cp multinode-266826:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile821991223/001/cp-test_multinode-266826.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 cp multinode-266826:/home/docker/cp-test.txt multinode-266826-m02:/home/docker/cp-test_multinode-266826_multinode-266826-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826-m02 "sudo cat /home/docker/cp-test_multinode-266826_multinode-266826-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 cp multinode-266826:/home/docker/cp-test.txt multinode-266826-m03:/home/docker/cp-test_multinode-266826_multinode-266826-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826-m03 "sudo cat /home/docker/cp-test_multinode-266826_multinode-266826-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 cp testdata/cp-test.txt multinode-266826-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 cp multinode-266826-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile821991223/001/cp-test_multinode-266826-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 cp multinode-266826-m02:/home/docker/cp-test.txt multinode-266826:/home/docker/cp-test_multinode-266826-m02_multinode-266826.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826 "sudo cat /home/docker/cp-test_multinode-266826-m02_multinode-266826.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 cp multinode-266826-m02:/home/docker/cp-test.txt multinode-266826-m03:/home/docker/cp-test_multinode-266826-m02_multinode-266826-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826-m03 "sudo cat /home/docker/cp-test_multinode-266826-m02_multinode-266826-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 cp testdata/cp-test.txt multinode-266826-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 cp multinode-266826-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile821991223/001/cp-test_multinode-266826-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 cp multinode-266826-m03:/home/docker/cp-test.txt multinode-266826:/home/docker/cp-test_multinode-266826-m03_multinode-266826.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826 "sudo cat /home/docker/cp-test_multinode-266826-m03_multinode-266826.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 cp multinode-266826-m03:/home/docker/cp-test.txt multinode-266826-m02:/home/docker/cp-test_multinode-266826-m03_multinode-266826-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 ssh -n multinode-266826-m02 "sudo cat /home/docker/cp-test_multinode-266826-m03_multinode-266826-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.91s)
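The copy test pushes testdata/cp-test.txt into each node with `minikube cp` and reads it back via `ssh -n <node> "sudo cat ..."` to confirm the contents survived. A condensed sketch of one such round trip, assuming a `minikube` binary on PATH (the run above uses out/minikube-linux-amd64) and a byte-for-byte comparison:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const profile = "multinode-266826"
		local := "testdata/cp-test.txt"
		remote := profile + ":/home/docker/cp-test.txt"

		// Push the file into the node, as the test does with `minikube cp`.
		if out, err := exec.Command("minikube", "-p", profile, "cp", local, remote).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
		}

		// Read it back over ssh and compare with the local copy.
		got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", profile,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		want, err := os.ReadFile(local)
		if err != nil {
			panic(err)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			panic("round-tripped file does not match testdata/cp-test.txt")
		}
		fmt.Println("copy verified")
	}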

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-266826 node stop m03: (1.194928563s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-266826 status: exit status 7 (440.371705ms)

                                                
                                                
-- stdout --
	multinode-266826
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-266826-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-266826-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-266826 status --alsologtostderr: exit status 7 (447.695195ms)

                                                
                                                
-- stdout --
	multinode-266826
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-266826-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-266826-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1226 22:05:55.065905  112055 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:05:55.066045  112055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:05:55.066055  112055 out.go:309] Setting ErrFile to fd 2...
	I1226 22:05:55.066059  112055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:05:55.066273  112055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
	I1226 22:05:55.066487  112055 out.go:303] Setting JSON to false
	I1226 22:05:55.066520  112055 mustload.go:65] Loading cluster: multinode-266826
	I1226 22:05:55.066632  112055 notify.go:220] Checking for updates...
	I1226 22:05:55.067046  112055 config.go:182] Loaded profile config "multinode-266826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:05:55.067065  112055 status.go:255] checking status of multinode-266826 ...
	I1226 22:05:55.067541  112055 cli_runner.go:164] Run: docker container inspect multinode-266826 --format={{.State.Status}}
	I1226 22:05:55.087641  112055 status.go:330] multinode-266826 host status = "Running" (err=<nil>)
	I1226 22:05:55.087677  112055 host.go:66] Checking if "multinode-266826" exists ...
	I1226 22:05:55.087897  112055 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-266826
	I1226 22:05:55.104247  112055 host.go:66] Checking if "multinode-266826" exists ...
	I1226 22:05:55.104497  112055 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:05:55.104544  112055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826
	I1226 22:05:55.119886  112055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826/id_rsa Username:docker}
	I1226 22:05:55.203867  112055 ssh_runner.go:195] Run: systemctl --version
	I1226 22:05:55.207632  112055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:05:55.218245  112055 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:05:55.269457  112055 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:56 SystemTime:2023-12-26 22:05:55.261091422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 22:05:55.270098  112055 kubeconfig.go:92] found "multinode-266826" server: "https://192.168.58.2:8443"
	I1226 22:05:55.270128  112055 api_server.go:166] Checking apiserver status ...
	I1226 22:05:55.270179  112055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 22:05:55.280488  112055 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1432/cgroup
	I1226 22:05:55.288414  112055 api_server.go:182] apiserver freezer: "6:freezer:/docker/ed230b7e6557c20867e253d5c717cd5262c1c659284976821cc8dcb35ff714d1/crio/crio-989be8f823455d61f8d14e519581eeb6ecbd3414312102c6319031429b728c18"
	I1226 22:05:55.288479  112055 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ed230b7e6557c20867e253d5c717cd5262c1c659284976821cc8dcb35ff714d1/crio/crio-989be8f823455d61f8d14e519581eeb6ecbd3414312102c6319031429b728c18/freezer.state
	I1226 22:05:55.295695  112055 api_server.go:204] freezer state: "THAWED"
	I1226 22:05:55.295718  112055 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1226 22:05:55.300088  112055 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1226 22:05:55.300107  112055 status.go:421] multinode-266826 apiserver status = Running (err=<nil>)
	I1226 22:05:55.300115  112055 status.go:257] multinode-266826 status: &{Name:multinode-266826 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1226 22:05:55.300145  112055 status.go:255] checking status of multinode-266826-m02 ...
	I1226 22:05:55.300363  112055 cli_runner.go:164] Run: docker container inspect multinode-266826-m02 --format={{.State.Status}}
	I1226 22:05:55.315734  112055 status.go:330] multinode-266826-m02 host status = "Running" (err=<nil>)
	I1226 22:05:55.315752  112055 host.go:66] Checking if "multinode-266826-m02" exists ...
	I1226 22:05:55.315973  112055 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-266826-m02
	I1226 22:05:55.331661  112055 host.go:66] Checking if "multinode-266826-m02" exists ...
	I1226 22:05:55.331892  112055 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:05:55.331925  112055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-266826-m02
	I1226 22:05:55.346500  112055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17857-7214/.minikube/machines/multinode-266826-m02/id_rsa Username:docker}
	I1226 22:05:55.431317  112055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:05:55.440896  112055 status.go:257] multinode-266826-m02 status: &{Name:multinode-266826-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1226 22:05:55.440942  112055 status.go:255] checking status of multinode-266826-m03 ...
	I1226 22:05:55.441218  112055 cli_runner.go:164] Run: docker container inspect multinode-266826-m03 --format={{.State.Status}}
	I1226 22:05:55.457157  112055 status.go:330] multinode-266826-m03 host status = "Stopped" (err=<nil>)
	I1226 22:05:55.457174  112055 status.go:343] host is not running, skipping remaining checks
	I1226 22:05:55.457179  112055 status.go:257] multinode-266826-m03 status: &{Name:multinode-266826-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.08s)
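Note that `minikube status` exits with status 7 once any node is stopped, so the harness has to treat that exit code as information rather than as a command failure. A small Go sketch of that pattern using exec.ExitError; it is illustrative, not the helpers_test.go implementation.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "-p", "multinode-266826", "status")
		out, err := cmd.Output()

		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Non-zero exit: for `status`, 7 reports stopped hosts rather than a tooling error.
			code = exitErr.ExitCode()
		} else if err != nil {
			panic(err) // the binary could not be run at all
		}
		fmt.Printf("exit %d\n%s", code, out)
	}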

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (10.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 node start m03 --alsologtostderr
E1226 22:05:55.728377   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-266826 node start m03 --alsologtostderr: (10.229306945s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.88s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (112.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-266826
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-266826
E1226 22:06:23.412138   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-266826: (24.787779543s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-266826 --wait=true -v=8 --alsologtostderr
E1226 22:07:11.686752   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-266826 --wait=true -v=8 --alsologtostderr: (1m27.107231026s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-266826
--- PASS: TestMultiNode/serial/RestartKeepsNodes (112.01s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (4.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-266826 node delete m03: (4.055528995s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.61s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-266826 stop: (23.585983121s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-266826 status: exit status 7 (92.773818ms)

                                                
                                                
-- stdout --
	multinode-266826
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-266826-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-266826 status --alsologtostderr: exit status 7 (91.378908ms)

                                                
                                                
-- stdout --
	multinode-266826
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-266826-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1226 22:08:26.695220  122150 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:08:26.695363  122150 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:08:26.695372  122150 out.go:309] Setting ErrFile to fd 2...
	I1226 22:08:26.695376  122150 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:08:26.695538  122150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
	I1226 22:08:26.695693  122150 out.go:303] Setting JSON to false
	I1226 22:08:26.695721  122150 mustload.go:65] Loading cluster: multinode-266826
	I1226 22:08:26.695813  122150 notify.go:220] Checking for updates...
	I1226 22:08:26.696052  122150 config.go:182] Loaded profile config "multinode-266826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:08:26.696068  122150 status.go:255] checking status of multinode-266826 ...
	I1226 22:08:26.696450  122150 cli_runner.go:164] Run: docker container inspect multinode-266826 --format={{.State.Status}}
	I1226 22:08:26.713050  122150 status.go:330] multinode-266826 host status = "Stopped" (err=<nil>)
	I1226 22:08:26.713090  122150 status.go:343] host is not running, skipping remaining checks
	I1226 22:08:26.713097  122150 status.go:257] multinode-266826 status: &{Name:multinode-266826 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1226 22:08:26.713119  122150 status.go:255] checking status of multinode-266826-m02 ...
	I1226 22:08:26.713343  122150 cli_runner.go:164] Run: docker container inspect multinode-266826-m02 --format={{.State.Status}}
	I1226 22:08:26.730100  122150 status.go:330] multinode-266826-m02 host status = "Stopped" (err=<nil>)
	I1226 22:08:26.730143  122150 status.go:343] host is not running, skipping remaining checks
	I1226 22:08:26.730154  122150 status.go:257] multinode-266826-m02 status: &{Name:multinode-266826-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.77s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (77.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-266826 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1226 22:08:34.730990   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 22:08:44.448232   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-266826 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.201582249s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266826 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (77.78s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (26.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-266826
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-266826-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-266826-m02 --driver=docker  --container-runtime=crio: exit status 14 (77.2442ms)

                                                
                                                
-- stdout --
	* [multinode-266826-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-266826-m02' is duplicated with machine name 'multinode-266826-m02' in profile 'multinode-266826'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-266826-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-266826-m03 --driver=docker  --container-runtime=crio: (24.463715888s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-266826
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-266826: exit status 80 (266.729646ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-266826
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-266826-m03 already exists in multinode-266826-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-266826-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-266826-m03: (1.854459729s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.72s)
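
For reference, the duplicate-name check exercised above can be reproduced by hand. A minimal sketch, assuming a fresh environment; the profile name "demo" is illustrative:

    # Create a two-node cluster; the second node's machine is named demo-m02.
    minikube start -p demo --nodes=2 --driver=docker --container-runtime=crio
    # Reusing that machine name as a profile name exits 14 (MK_USAGE: profile name should be unique).
    minikube start -p demo-m02 --driver=docker --container-runtime=crio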

TestPreload (119.65s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-702190 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1226 22:10:55.728729   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-702190 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m6.727118523s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-702190 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-702190
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-702190: (5.67342275s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-702190 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1226 22:12:11.686197   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-702190 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (44.091280256s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-702190 image list
helpers_test.go:175: Cleaning up "test-preload-702190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-702190
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-702190: (2.22581316s)
--- PASS: TestPreload (119.65s)
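
Condensed, the preload flow above (flags taken from the run; the profile name is illustrative): start with preload tarballs disabled, side-load an image, stop, restart, and confirm the image survives the restart:

    minikube start -p preload-demo --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --driver=docker --container-runtime=crio
    minikube -p preload-demo image list    # busybox should still be listed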

TestScheduledStopUnix (97.86s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-715327 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-715327 --memory=2048 --driver=docker  --container-runtime=crio: (21.974491447s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-715327 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-715327 -n scheduled-stop-715327
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-715327 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-715327 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-715327 -n scheduled-stop-715327
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-715327
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-715327 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1226 22:13:44.448580   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-715327
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-715327: exit status 7 (79.536614ms)
-- stdout --
	scheduled-stop-715327
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-715327 -n scheduled-stop-715327
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-715327 -n scheduled-stop-715327: exit status 7 (79.15104ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-715327" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-715327
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-715327: (4.452277589s)
--- PASS: TestScheduledStopUnix (97.86s)
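
The scheduled-stop sequence exercised above, as a sketch (the profile name is illustrative):

    minikube stop -p sched-demo --schedule 5m         # arm a stop five minutes out
    minikube status --format={{.TimeToStop}} -p sched-demo
    minikube stop -p sched-demo --cancel-scheduled    # disarm the pending stop
    minikube stop -p sched-demo --schedule 15s        # re-arm; status later exits 7 with everything Stopped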

TestInsufficientStorage (10.41s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-647517 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-647517 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.070787523s)
-- stdout --
	{"specversion":"1.0","id":"1430fc8f-5e46-4c99-91b9-9568e2fe27a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-647517] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"79c565ae-cc17-4d15-afbe-3ba55f81d40b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17857"}}
	{"specversion":"1.0","id":"29d31531-b1b8-4f3d-a46a-f33290bed61a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"112c35a5-d126-47fa-b8b2-9e89938f455c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig"}}
	{"specversion":"1.0","id":"5c731dea-fb11-4cf8-91c9-8befc13bc706","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube"}}
	{"specversion":"1.0","id":"9587e230-5643-4235-8e46-8d66ed6a7277","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5324a59e-a94c-43a8-b9c4-6e919e7fae62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6d605611-210f-4545-bd55-7539063b0930","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"2bffc8ef-d102-411e-8e63-354862adb0f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0687c74c-2add-4d23-ba9a-bf1afe70a5c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6b82f6aa-af7d-4a83-bc1f-56176fc572a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"72e1eb6b-b2e1-4eec-b78d-7ab58a47ce6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-647517 in cluster insufficient-storage-647517","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"51374c72-295e-4032-af00-044ac49acbdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1703498848-17857 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"696cf7bc-cf37-4861-940f-222d157fffab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2d77d57-6962-4542-b13c-4ab87caa7fa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-647517 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-647517 --output=json --layout=cluster: exit status 7 (260.107009ms)
-- stdout --
	{"Name":"insufficient-storage-647517","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-647517","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1226 22:14:02.630134  143850 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-647517" does not appear in /home/jenkins/minikube-integration/17857-7214/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-647517 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-647517 --output=json --layout=cluster: exit status 7 (265.084517ms)
-- stdout --
	{"Name":"insufficient-storage-647517","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-647517","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1226 22:14:02.895668  143939 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-647517" does not appear in /home/jenkins/minikube-integration/17857-7214/kubeconfig
	E1226 22:14:02.905043  143939 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/insufficient-storage-647517/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-647517" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-647517
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-647517: (1.814369765s)
--- PASS: TestInsufficientStorage (10.41s)
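
The storage failure is simulated through the test-only environment variables visible in the JSON events above; a sketch of the same invocation, with an illustrative profile name:

    # Pretend /var has 100 units of capacity with only 19 free; start exits 26
    # (RSRC_DOCKER_STORAGE). Per the error message, '--force' skips the check.
    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p storage-demo --output=json --driver=docker --container-runtime=crio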

TestKubernetesUpgrade (365.39s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-642745 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1226 22:15:55.728745   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-642745 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.526911958s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-642745
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-642745: (1.233098163s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-642745 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-642745 status --format={{.Host}}: exit status 7 (94.987528ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-642745 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-642745 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m31.373830143s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-642745 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-642745 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-642745 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (116.236544ms)
-- stdout --
	* [kubernetes-upgrade-642745] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-642745
	    minikube start -p kubernetes-upgrade-642745 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6427452 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-642745 --kubernetes-version=v1.29.0-rc.2
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-642745 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1226 22:20:55.730114   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-642745 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.51243509s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-642745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-642745
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-642745: (2.417622967s)
--- PASS: TestKubernetesUpgrade (365.39s)
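
In brief: an upgrade across versions is allowed after a stop, while an in-place downgrade is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED). A sketch with an illustrative profile name:

    minikube start -p upgrade-demo --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
    minikube stop -p upgrade-demo
    minikube start -p upgrade-demo --kubernetes-version=v1.29.0-rc.2 --driver=docker --container-runtime=crio    # upgrade: ok
    minikube start -p upgrade-demo --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio         # downgrade: exit 106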

TestMissingContainerUpgrade (141.84s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.1692083289.exe start -p missing-upgrade-679370 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.1692083289.exe start -p missing-upgrade-679370 --memory=2200 --driver=docker  --container-runtime=crio: (1m6.101989956s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-679370
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-679370: (10.279085062s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-679370
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-679370 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1226 22:17:11.686325   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 22:17:18.772474   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-679370 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.03856154s)
helpers_test.go:175: Cleaning up "missing-upgrade-679370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-679370
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-679370: (2.019136067s)
--- PASS: TestMissingContainerUpgrade (141.84s)
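
The scenario above: a cluster created by an old release loses its node container behind minikube's back, and the current binary has to recover it. A sketch; the old-binary path and profile name are illustrative:

    /path/to/minikube-v1.9.0 start -p missing-demo --driver=docker --container-runtime=crio
    docker stop missing-demo && docker rm missing-demo    # remove the node container out from under minikube
    minikube start -p missing-demo --driver=docker --container-runtime=crio    # current binary recreates it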

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-135048 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-135048 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (104.428372ms)
-- stdout --
	* [NoKubernetes-135048] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
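
As the stderr above shows, --no-kubernetes and --kubernetes-version are mutually exclusive. A sketch of the failing call and the remedy minikube suggests (profile name illustrative):

    minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio    # exit 14 (MK_USAGE)
    minikube config unset kubernetes-version    # clears a version pinned in global config
    minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio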

TestNoKubernetes/serial/StartWithK8s (31.95s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-135048 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-135048 --driver=docker  --container-runtime=crio: (31.591911999s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-135048 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (31.95s)

TestNetworkPlugins/group/false (8.82s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-546972 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-546972 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (201.42165ms)
-- stdout --
	* [false-546972] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I1226 22:14:08.951878  146029 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:14:08.952021  146029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:14:08.952031  146029 out.go:309] Setting ErrFile to fd 2...
	I1226 22:14:08.952036  146029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:14:08.952230  146029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-7214/.minikube/bin
	I1226 22:14:08.952826  146029 out.go:303] Setting JSON to false
	I1226 22:14:08.954588  146029 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3399,"bootTime":1703625450,"procs":613,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1226 22:14:08.954690  146029 start.go:138] virtualization: kvm guest
	I1226 22:14:08.957952  146029 out.go:177] * [false-546972] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1226 22:14:08.959972  146029 notify.go:220] Checking for updates...
	I1226 22:14:08.960001  146029 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:14:08.961505  146029 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:14:08.962972  146029 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-7214/kubeconfig
	I1226 22:14:08.964505  146029 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-7214/.minikube
	I1226 22:14:08.965963  146029 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1226 22:14:08.967482  146029 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:14:08.969623  146029 config.go:182] Loaded profile config "NoKubernetes-135048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:14:08.969778  146029 config.go:182] Loaded profile config "force-systemd-env-156510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:14:08.969912  146029 config.go:182] Loaded profile config "offline-crio-162428": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:14:08.970025  146029 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 22:14:08.997316  146029 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 22:14:08.997425  146029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:14:09.072753  146029 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:75 SystemTime:2023-12-26 22:14:09.058631699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1226 22:14:09.072843  146029 docker.go:295] overlay module found
	I1226 22:14:09.075627  146029 out.go:177] * Using the docker driver based on user configuration
	I1226 22:14:09.077240  146029 start.go:298] selected driver: docker
	I1226 22:14:09.077261  146029 start.go:902] validating driver "docker" against <nil>
	I1226 22:14:09.077287  146029 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 22:14:09.079562  146029 out.go:177] 
	W1226 22:14:09.080931  146029 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1226 22:14:09.082325  146029 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-546972 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-546972

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-546972

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-546972

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-546972

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-546972

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-546972

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-546972

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-546972

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-546972

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-546972

>>> host: /etc/nsswitch.conf:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: /etc/hosts:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: /etc/resolv.conf:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-546972

>>> host: crictl pods:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: crictl containers:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> k8s: describe netcat deployment:
error: context "false-546972" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-546972" does not exist

>>> k8s: netcat logs:
error: context "false-546972" does not exist

>>> k8s: describe coredns deployment:
error: context "false-546972" does not exist

>>> k8s: describe coredns pods:
error: context "false-546972" does not exist

>>> k8s: coredns logs:
error: context "false-546972" does not exist

>>> k8s: describe api server pod(s):
error: context "false-546972" does not exist

>>> k8s: api server logs:
error: context "false-546972" does not exist

>>> host: /etc/cni:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: ip a s:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: ip r s:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: iptables-save:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: iptables table nat:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> k8s: describe kube-proxy daemon set:
error: context "false-546972" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-546972" does not exist

>>> k8s: kube-proxy logs:
error: context "false-546972" does not exist

>>> host: kubelet daemon status:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: kubelet daemon config:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> k8s: kubelet logs:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-546972

>>> host: docker daemon status:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: docker daemon config:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: /etc/docker/daemon.json:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: docker system info:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: cri-docker daemon status:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: cri-docker daemon config:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: cri-dockerd version:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: containerd daemon status:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: containerd daemon config:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: /etc/containerd/config.toml:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: containerd config dump:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: crio daemon status:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: crio daemon config:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: /etc/crio:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

>>> host: crio config:
* Profile "false-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546972"

----------------------- debugLogs end: false-546972 [took: 8.400127926s] --------------------------------
helpers_test.go:175: Cleaning up "false-546972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-546972
--- PASS: TestNetworkPlugins/group/false (8.82s)
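
The group verifies that --cni=false is rejected up front, since CRI-O needs a CNI plugin for pod networking. A sketch (profile name illustrative):

    minikube start -p cni-demo --cni=false --driver=docker --container-runtime=crio    # exit 14: the "crio" container runtime requires CNI
    minikube start -p cni-demo --driver=docker --container-runtime=crio                # omit --cni and a plugin is picked automatically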

TestNoKubernetes/serial/StartWithStopK8s (6.43s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-135048 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-135048 --no-kubernetes --driver=docker  --container-runtime=crio: (4.223027718s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-135048 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-135048 status -o json: exit status 2 (289.722014ms)
-- stdout --
	{"Name":"NoKubernetes-135048","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-135048
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-135048: (1.914037689s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.43s)

TestNoKubernetes/serial/Start (6.93s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-135048 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-135048 --no-kubernetes --driver=docker  --container-runtime=crio: (6.925714215s)
--- PASS: TestNoKubernetes/serial/Start (6.93s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-135048 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-135048 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.15374ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
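
The verification above just asks systemd inside the node whether the kubelet unit is active; a sketch (profile name illustrative):

    # Exits non-zero (systemctl status 3, "inactive") while Kubernetes is not running:
    minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"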

TestNoKubernetes/serial/ProfileList (1.66s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.013102574s)
--- PASS: TestNoKubernetes/serial/ProfileList (1.66s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-135048
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-135048: (1.28999428s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (8.35s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-135048 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-135048 --driver=docker  --container-runtime=crio: (8.353367723s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.35s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-135048 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-135048 "sudo systemctl is-active --quiet service kubelet": exit status 1 (320.893467ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.34s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.55s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-845381
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.55s)

                                                
                                    
x
+
TestPause/serial/Start (42.26s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-479774 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-479774 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (42.259457309s)
--- PASS: TestPause/serial/Start (42.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (72.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-546972 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-546972 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m12.291224459s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.29s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (39.18s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-479774 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-479774 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.155007764s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (40.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-546972 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1226 22:18:44.447826   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-546972 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (40.151941038s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (40.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-546972 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-546972 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dcx47" [60587396-078d-4c22-80a7-3287047a2ee9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dcx47" [60587396-078d-4c22-80a7-3287047a2ee9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003822116s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.19s)
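The NetCatPod step is essentially a readiness wait on the pods created from testdata/netcat-deployment.yaml. A rough Go equivalent of that wait, shelling out to `kubectl wait` with the context name from this run (this is a sketch, not the test's own polling code; it assumes the pod already exists, and the 15m timeout mirrors the wait window above):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForNetcat blocks until the app=netcat pods in the default namespace
// report Ready, roughly what net_test.go:163 polls for after the
// deployment is replaced.
func waitForNetcat(kubeContext string, timeout time.Duration) error {
	cmd := exec.Command("kubectl", "--context", kubeContext,
		"wait", "--for=condition=Ready", "pod", "-l", "app=netcat",
		"--timeout="+timeout.String())
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	if err := waitForNetcat("auto-546972", 15*time.Minute); err != nil {
		fmt.Println("netcat pod never became ready:", err)
	}
}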

                                                
                                    
x
+
TestPause/serial/Pause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-479774 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-479774 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-479774 --output=json --layout=cluster: exit status 2 (302.924322ms)

                                                
                                                
-- stdout --
	{"Name":"pause-479774","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-479774","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
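The exit status 2 above is expected for a paused profile; the useful part is the cluster-layout JSON on stdout. A small Go sketch that decodes a trimmed copy of that output (the struct and field names below are illustrative and cover only the fields visible in the log, not minikube's own types):

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name          string `json:"Name"`
	StatusCode    int    `json:"StatusCode"`
	StatusName    string `json:"StatusName"`
	BinaryVersion string `json:"BinaryVersion"`
	Nodes         []node `json:"Nodes"`
}

func main() {
	// Trimmed copy of the stdout captured by TestPause/serial/VerifyStatus.
	raw := `{"Name":"pause-479774","StatusCode":418,"StatusName":"Paused",
	 "BinaryVersion":"v1.32.0",
	 "Nodes":[{"Name":"pause-479774","StatusCode":200,"StatusName":"OK",
	  "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
	                "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, name, c.StatusName)
		}
	}
}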

                                                
                                    
x
+
TestPause/serial/Unpause (0.6s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-479774 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xl5pp" [f35b0422-1bbc-4dee-b1c7-02114ce0ba10] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00474116s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.77s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-479774 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.77s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.61s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-479774 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-479774 --alsologtostderr -v=5: (2.611108419s)
--- PASS: TestPause/serial/DeletePaused (2.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-546972 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-546972 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-546972 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.64s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-479774
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-479774: exit status 1 (14.737112ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-479774: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.64s)
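The `docker volume inspect` failure above is the point of the check: after `delete -p pause-479774` the profile's volume no longer exists. A hedged Go sketch of the same verification, keying off the "no such volume" daemon error shown in stderr (helper name is illustrative):

package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// volumeGone reports whether `docker volume inspect <name>` fails with a
// "no such volume" error, which is what VerifyDeletedResources relies on
// to prove the profile's volume was removed.
func volumeGone(name string) (bool, error) {
	var stderr bytes.Buffer
	cmd := exec.Command("docker", "volume", "inspect", name)
	cmd.Stderr = &stderr
	err := cmd.Run()
	if err == nil {
		return false, nil // inspect succeeded, so the volume still exists
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) &&
		strings.Contains(strings.ToLower(stderr.String()), "no such volume") {
		return true, nil
	}
	return false, err
}

func main() {
	gone, err := volumeGone("pause-479774")
	fmt.Println("volume gone:", gone, "err:", err)
}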

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (64.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-546972 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-546972 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m4.552418943s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-546972 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-546972 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wbhnb" [27600cd3-1781-4e22-831f-acdb816fc573] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wbhnb" [27600cd3-1781-4e22-831f-acdb816fc573] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.00433898s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-546972 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-546972 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-546972 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)
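The DNS, Localhost and HairPin steps above share one pattern: run a probe inside the netcat deployment and rely on its exit code. DNS resolves kubernetes.default, Localhost netcats the pod's own port 8080, and HairPin netcats the `netcat` service so the pod reaches itself back through the service. A compact Go sketch that replays the three probes from this run (context name taken from the log; this is not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a command inside the netcat deployment via kubectl exec,
// mirroring the invocations logged by net_test.go above.
func probe(kubeContext string, args ...string) error {
	base := []string{"--context", kubeContext, "exec", "deployment/netcat", "--"}
	cmd := exec.Command("kubectl", append(base, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ %v\n%s", args, out)
	return err
}

func main() {
	ctx := "kindnet-546972"
	checks := [][]string{
		{"nslookup", "kubernetes.default"},                  // cluster DNS
		{"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"}, // pod reaches itself directly
		{"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},    // hairpin via its own service
	}
	for _, c := range checks {
		if err := probe(ctx, c...); err != nil {
			fmt.Println("probe failed:", err)
		}
	}
}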

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (58.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-546972 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-546972 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (58.242187001s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (80.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-546972 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-546972 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m20.316032718s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2b57q" [2de4a4cf-0b09-4130-a8e7-480eb58ed414] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00558662s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-546972 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-546972 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rfps7" [f4db52f4-d91f-450d-87fb-b7c139335b62] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rfps7" [f4db52f4-d91f-450d-87fb-b7c139335b62] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003773986s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-546972 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-546972 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2vclq" [b55bf482-d7ab-455c-affb-8316ec869c6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2vclq" [b55bf482-d7ab-455c-affb-8316ec869c6f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004333891s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-546972 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-546972 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-546972 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-546972 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-546972 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-546972 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (60s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-546972 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-546972 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (59.994907797s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (38.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-546972 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-546972 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (38.006325543s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-546972 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-546972 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-89t6w" [df3ffe81-ea98-440d-af99-e21691eb179b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-89t6w" [df3ffe81-ea98-440d-af99-e21691eb179b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004652356s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-546972 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-546972 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-546972 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (115.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-884510 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-884510 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m55.104721422s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (115.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-546972 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-546972 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s9qwh" [677fcc76-9b57-4d95-9743-8544fb426c74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s9qwh" [677fcc76-9b57-4d95-9743-8544fb426c74] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00364901s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.94s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (59.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-446861 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-446861 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (59.656045341s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (59.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-546972 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-546972 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-546972 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vslqx" [0ecd958b-3a1a-414b-97b4-2539a638696e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004649411s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-546972 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-546972 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gkrdk" [1c8779f7-84a9-4394-843b-0105e238cefe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gkrdk" [1c8779f7-84a9-4394-843b-0105e238cefe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003886407s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-546972 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-546972 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-546972 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)
E1226 22:29:22.600319   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/bridge-546972/client.crt: no such file or directory
E1226 22:29:27.583396   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
E1226 22:29:34.787074   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:29:37.066172   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/flannel-546972/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (40.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-070208 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-070208 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (40.654935015s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.66s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-365705 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-365705 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m11.489557025s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.49s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-446861 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1412e77c-50f6-46ce-b3e8-d4ad5c5b70d4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1412e77c-50f6-46ce-b3e8-d4ad5c5b70d4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004025785s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-446861 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-446861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-446861 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-446861 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-446861 --alsologtostderr -v=3: (12.03914178s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.04s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (7.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-070208 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3ceb33e7-f735-49a3-b6ee-80f90cb497b3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3ceb33e7-f735-49a3-b6ee-80f90cb497b3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003932503s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-070208 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-446861 -n no-preload-446861
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-446861 -n no-preload-446861: exit status 7 (77.908079ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-446861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
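The `status error: exit status 7 (may be ok)` line shows why the test keeps going: `minikube status` reports a stopped host both on stdout (`Stopped`) and through a non-zero exit code, and the harness accepts that instead of failing. A rough Go sketch that captures both, assuming `minikube` on PATH (exit 7 is simply what this run produced for a stopped profile, not a code taken from documentation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostState returns the Host field printed by `minikube status` plus the
// command's exit code, so a caller can decide, as the test does, that a
// non-zero exit with a "Stopped" host is acceptable.
func hostState(profile string) (string, int, error) {
	cmd := exec.Command("minikube", "status",
		"--format={{.Host}}", "-p", profile)
	out, err := cmd.Output()
	state := string(out)
	if err == nil {
		return state, 0, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return state, exitErr.ExitCode(), nil
	}
	return state, -1, err
}

func main() {
	state, code, err := hostState("no-preload-446861")
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("host=%q exit=%d\n", state, code)
}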

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (340.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-446861 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-446861 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (5m40.023835074s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-446861 -n no-preload-446861
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (340.49s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-070208 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-070208 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (11.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-070208 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-070208 --alsologtostderr -v=3: (11.935296336s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.94s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-070208 -n embed-certs-070208
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-070208 -n embed-certs-070208: exit status 7 (86.308974ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-070208 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (342.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-070208 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-070208 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m41.991853502s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-070208 -n embed-certs-070208
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (342.60s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (7.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-884510 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [eebf1354-7e2a-476c-a544-03e6b15d412d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [eebf1354-7e2a-476c-a544-03e6b15d412d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.003976849s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-884510 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.40s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-884510 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-884510 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-884510 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-884510 --alsologtostderr -v=3: (12.131307602s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-365705 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e8330c46-e58f-4e60-9c5e-ca7b8feb0aee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1226 22:23:44.447869   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e8330c46-e58f-4e60-9c5e-ca7b8feb0aee] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.00403404s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-365705 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-884510 -n old-k8s-version-884510
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-884510 -n old-k8s-version-884510: exit status 7 (84.1425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-884510 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (426.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-884510 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-884510 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m6.464747241s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-884510 -n old-k8s-version-884510
E1226 22:30:55.728089   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (426.80s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-365705 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-365705 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (14.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-365705 --alsologtostderr -v=3
E1226 22:23:59.897810   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
E1226 22:23:59.903183   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
E1226 22:23:59.913496   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
E1226 22:23:59.933785   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
E1226 22:23:59.974111   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
E1226 22:24:00.054999   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
E1226 22:24:00.215392   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
E1226 22:24:00.536621   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
E1226 22:24:01.177851   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
E1226 22:24:02.458753   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
E1226 22:24:05.018885   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-365705 --alsologtostderr -v=3: (14.61497225s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (14.62s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-365705 -n default-k8s-diff-port-365705
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-365705 -n default-k8s-diff-port-365705: exit status 7 (79.394865ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-365705 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-365705 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1226 22:24:07.101978   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:24:07.107802   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:24:07.118090   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:24:07.138368   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:24:07.178706   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:24:07.259043   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:24:07.419440   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:24:07.740252   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:24:08.381222   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:24:09.662343   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:24:10.139372   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
E1226 22:24:12.223228   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:24:17.344159   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:24:20.380330   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
E1226 22:24:27.584311   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:24:40.861267   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
E1226 22:24:48.065044   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:25:14.731965   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 22:25:16.363934   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
E1226 22:25:16.369249   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
E1226 22:25:16.379550   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
E1226 22:25:16.399826   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
E1226 22:25:16.440134   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
E1226 22:25:16.520501   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
E1226 22:25:16.680980   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
E1226 22:25:17.001983   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
E1226 22:25:17.642467   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
E1226 22:25:18.923033   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
E1226 22:25:21.484148   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
E1226 22:25:21.821715   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
E1226 22:25:26.605356   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
E1226 22:25:27.620314   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/custom-flannel-546972/client.crt: no such file or directory
E1226 22:25:27.625636   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/custom-flannel-546972/client.crt: no such file or directory
E1226 22:25:27.635935   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/custom-flannel-546972/client.crt: no such file or directory
E1226 22:25:27.656202   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/custom-flannel-546972/client.crt: no such file or directory
E1226 22:25:27.696478   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/custom-flannel-546972/client.crt: no such file or directory
E1226 22:25:27.777328   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/custom-flannel-546972/client.crt: no such file or directory
E1226 22:25:27.937806   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/custom-flannel-546972/client.crt: no such file or directory
E1226 22:25:28.258449   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/custom-flannel-546972/client.crt: no such file or directory
E1226 22:25:28.899685   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/custom-flannel-546972/client.crt: no such file or directory
E1226 22:25:29.026020   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:25:30.180683   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/custom-flannel-546972/client.crt: no such file or directory
E1226 22:25:32.741910   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/custom-flannel-546972/client.crt: no such file or directory
E1226 22:25:36.845641   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
E1226 22:25:37.862596   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/custom-flannel-546972/client.crt: no such file or directory
E1226 22:25:48.103326   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/custom-flannel-546972/client.crt: no such file or directory
E1226 22:25:55.728033   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/ingress-addon-legacy-038954/client.crt: no such file or directory
E1226 22:25:57.325909   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
E1226 22:26:07.004309   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
E1226 22:26:07.009598   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
E1226 22:26:07.019856   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
E1226 22:26:07.040248   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
E1226 22:26:07.080572   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
E1226 22:26:07.161004   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
E1226 22:26:07.321736   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
E1226 22:26:07.642180   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
E1226 22:26:08.282930   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
E1226 22:26:08.584600   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/custom-flannel-546972/client.crt: no such file or directory
E1226 22:26:09.563726   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
E1226 22:26:12.124404   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
E1226 22:26:17.244704   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
E1226 22:26:27.485137   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
E1226 22:26:38.286377   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
E1226 22:26:38.753545   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/bridge-546972/client.crt: no such file or directory
E1226 22:26:38.758892   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/bridge-546972/client.crt: no such file or directory
E1226 22:26:38.769535   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/bridge-546972/client.crt: no such file or directory
E1226 22:26:38.789899   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/bridge-546972/client.crt: no such file or directory
E1226 22:26:38.830267   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/bridge-546972/client.crt: no such file or directory
E1226 22:26:38.910724   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/bridge-546972/client.crt: no such file or directory
E1226 22:26:39.071486   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/bridge-546972/client.crt: no such file or directory
E1226 22:26:39.392020   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/bridge-546972/client.crt: no such file or directory
E1226 22:26:40.033062   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/bridge-546972/client.crt: no such file or directory
E1226 22:26:41.314026   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/bridge-546972/client.crt: no such file or directory
E1226 22:26:43.742841   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
E1226 22:26:43.875141   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/bridge-546972/client.crt: no such file or directory
E1226 22:26:47.965481   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
E1226 22:26:48.995879   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/bridge-546972/client.crt: no such file or directory
E1226 22:26:49.545624   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/custom-flannel-546972/client.crt: no such file or directory
E1226 22:26:50.946368   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt: no such file or directory
E1226 22:26:53.222645   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/flannel-546972/client.crt: no such file or directory
E1226 22:26:53.227926   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/flannel-546972/client.crt: no such file or directory
E1226 22:26:53.238225   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/flannel-546972/client.crt: no such file or directory
E1226 22:26:53.258515   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/flannel-546972/client.crt: no such file or directory
E1226 22:26:53.298837   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/flannel-546972/client.crt: no such file or directory
E1226 22:26:53.379242   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/flannel-546972/client.crt: no such file or directory
E1226 22:26:53.539770   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/flannel-546972/client.crt: no such file or directory
E1226 22:26:53.860207   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/flannel-546972/client.crt: no such file or directory
E1226 22:26:54.500584   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/flannel-546972/client.crt: no such file or directory
E1226 22:26:55.781677   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/flannel-546972/client.crt: no such file or directory
E1226 22:26:58.342004   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/flannel-546972/client.crt: no such file or directory
E1226 22:26:59.236993   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/bridge-546972/client.crt: no such file or directory
E1226 22:27:03.462718   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/flannel-546972/client.crt: no such file or directory
E1226 22:27:11.686035   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/addons-989445/client.crt: no such file or directory
E1226 22:27:13.703911   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/flannel-546972/client.crt: no such file or directory
E1226 22:27:19.718213   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/bridge-546972/client.crt: no such file or directory
E1226 22:27:28.926521   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
E1226 22:27:34.184327   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/flannel-546972/client.crt: no such file or directory
E1226 22:28:00.207446   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
E1226 22:28:00.679117   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/bridge-546972/client.crt: no such file or directory
E1226 22:28:11.466473   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/custom-flannel-546972/client.crt: no such file or directory
E1226 22:28:15.144937   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/flannel-546972/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-365705 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m36.003777877s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-365705 -n default-k8s-diff-port-365705
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.49s)
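
Note: the interleaved `E1226 ... cert_rotation.go:168` lines come from client-go's certificate-rotation watcher in the shared test process. The kubeconfig still references client.crt files of profiles that earlier tests deleted (auto-546972, kindnet-546972, calico-546972, custom-flannel-546972, enable-default-cni-546972, bridge-546972, flannel-546972), so each reload fails with "no such file or directory" and is retried with backoff, which is why the same path repeats at roughly doubling intervals. These lines are background noise for this subtest, not part of its verdict. A hedged sketch (hypothetical helper) of the kind of pre-flight check that would surface such stale references:

package main

import (
	"fmt"
	"os"
)

// staleClientCerts reports which referenced client-certificate paths no
// longer exist on disk -- the condition behind the repeated
// "cert_rotation.go:168 ... no such file or directory" lines above.
// (Hypothetical helper: a real check would parse the kubeconfig for its
// client-certificate fields instead of taking paths as input.)
func staleClientCerts(paths []string) []string {
	var stale []string
	for _, p := range paths {
		if _, err := os.Stat(p); os.IsNotExist(err) {
			stale = append(stale, p)
		}
	}
	return stale
}

func main() {
	refs := []string{
		"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt",
		"/home/jenkins/minikube-integration/17857-7214/.minikube/profiles/kindnet-546972/client.crt",
	}
	for _, p := range staleClientCerts(refs) {
		fmt.Println("stale client cert still referenced:", p)
	}
}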

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kg5gg" [06776686-5c3c-4fe0-813c-8003b98abdcd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1226 22:28:44.448362   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/functional-131935/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kg5gg" [06776686-5c3c-4fe0-813c-8003b98abdcd] Running
E1226 22:28:50.847681   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005553995s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)
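
Note: the "waiting 9m0s for pods matching ..." lines are a label-selector readiness poll: the harness watches the kubernetes-dashboard namespace until a pod labelled k8s-app=kubernetes-dashboard moves from Pending (containers not ready) to Running, then records how long that took. Outside the harness, the same wait can be approximated with `kubectl wait`; a minimal sketch, assuming kubectl on PATH and the context name from this run:

package main

import (
	"fmt"
	"os/exec"
)

// waitForDashboard blocks until pods matching the dashboard label report
// the Ready condition, or the timeout expires -- the same check the
// harness performs by polling the pod list.
func waitForDashboard(context string) error {
	cmd := exec.Command("kubectl", "--context", context,
		"wait", "--for=condition=ready", "pod",
		"-l", "k8s-app=kubernetes-dashboard",
		"-n", "kubernetes-dashboard", "--timeout=9m")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	if err := waitForDashboard("no-preload-446861"); err != nil {
		fmt.Println("dashboard never became ready:", err)
	}
}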

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kg5gg" [06776686-5c3c-4fe0-813c-8003b98abdcd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005271581s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-446861 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fkbvm" [506b4fa1-52be-4818-baaf-301c08897122] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1226 22:28:59.897003   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/auto-546972/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fkbvm" [506b4fa1-52be-4818-baaf-301c08897122] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.004899236s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-446861 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)
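
Note: VerifyKubernetesImages dumps the images present in the node with `image list --format=json` and logs anything outside the expected per-version Kubernetes image set; the "Found non-minikube image" lines (kindnetd and the busybox test image) are informational, not failures. A sketch of consuming that JSON, assuming (not verified against this minikube build) an array of objects carrying a `repoTags` field:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image models only the field this sketch needs from
// `minikube image list --format=json`; the field name is an assumption.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "no-preload-446861",
		"image", "list", "--format=json").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("unexpected output shape:", err)
		return
	}
	// The harness diffs this list against the images expected for the
	// Kubernetes version under test; here we just print the tags.
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}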

TestStartStop/group/no-preload/serial/Pause (3.52s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-446861 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-446861 --alsologtostderr -v=1: (1.028854512s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-446861 -n no-preload-446861
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-446861 -n no-preload-446861: exit status 2 (422.480805ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-446861 -n no-preload-446861
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-446861 -n no-preload-446861: exit status 2 (406.093451ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-446861 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-446861 -n no-preload-446861
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-446861 -n no-preload-446861
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.52s)
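
Note: the Pause subtest drives a full pause/verify/unpause/verify cycle. While paused, the status templates print `Paused` for the API server and `Stopped` for the kubelet, each with exit status 2, which the harness again treats as "may be ok"; after unpause both status calls succeed. A condensed sketch of the same sequence (profile name from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// statusField queries a single field of `minikube status`, ignoring the
// exit status 2 that paused or stopped components report, since the
// state of interest is printed on stdout.
func statusField(profile, field string) string {
	out, _ := exec.Command("minikube", "status",
		"--format={{."+field+"}}", "-p", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "no-preload-446861"
	if err := exec.Command("minikube", "pause", "-p", profile).Run(); err != nil {
		fmt.Println("pause failed:", err)
		return
	}
	// Expected while paused: APIServer=Paused, Kubelet=Stopped.
	fmt.Println("apiserver:", statusField(profile, "APIServer"))
	fmt.Println("kubelet:", statusField(profile, "Kubelet"))
	if err := exec.Command("minikube", "unpause", "-p", profile).Run(); err != nil {
		fmt.Println("unpause failed:", err)
	}
}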

TestStartStop/group/newest-cni/serial/FirstStart (35.86s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-175432 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-175432 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (35.856717652s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.86s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fkbvm" [506b4fa1-52be-4818-baaf-301c08897122] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005697848s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-070208 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-070208 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (3.63s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-070208 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-070208 --alsologtostderr -v=1: (1.064346684s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-070208 -n embed-certs-070208
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-070208 -n embed-certs-070208: exit status 2 (412.008754ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-070208 -n embed-certs-070208
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-070208 -n embed-certs-070208: exit status 2 (370.944856ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-070208 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-070208 -n embed-certs-070208
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-070208 -n embed-certs-070208
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.63s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-86p6z" [1fdfe72d-434d-4eaf-901a-3fe61a51d5ca] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-86p6z" [1fdfe72d-434d-4eaf-901a-3fe61a51d5ca] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.004206195s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-175432 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-175432 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.068447055s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)
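
Note: the "cni mode requires additional setup before pods can schedule" warnings appear because this profile is started with --network-plugin=cni and a kubeadm.pod-network-cidr but no CNI plugin is deployed, so user pods would stay Pending; the harness therefore skips the app-deployment checks (DeployApp above and UserAppExistsAfterStop/AddonExistsAfterStop further below pass in 0.00s without deploying anything). The missing setup is installing a CNI by applying a published manifest; a hedged sketch (flannel's release manifest URL assumed current):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Install a CNI so pods can schedule in a --network-plugin=cni
	// cluster; flannel is one option, though it matches the test's
	// 10.42.0.0/16 CIDR only if its manifest is edited accordingly
	// (flannel defaults to 10.244.0.0/16).
	cmd := exec.Command("kubectl", "--context", "newest-cni-175432",
		"apply", "-f",
		"https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}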

TestStartStop/group/newest-cni/serial/Stop (1.58s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-175432 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-175432 --alsologtostderr -v=3: (1.578872343s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.58s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-175432 -n newest-cni-175432
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-175432 -n newest-cni-175432: exit status 7 (119.220685ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-175432 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/newest-cni/serial/SecondStart (27.33s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-175432 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-175432 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (26.905037907s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-175432 -n newest-cni-175432
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (27.33s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-86p6z" [1fdfe72d-434d-4eaf-901a-3fe61a51d5ca] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004368467s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-365705 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-365705 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-365705 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-365705 -n default-k8s-diff-port-365705
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-365705 -n default-k8s-diff-port-365705: exit status 2 (328.832782ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-365705 -n default-k8s-diff-port-365705
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-365705 -n default-k8s-diff-port-365705: exit status 2 (321.792621ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-365705 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-365705 -n default-k8s-diff-port-365705
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-365705 -n default-k8s-diff-port-365705
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-175432 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (2.91s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-175432 --alsologtostderr -v=1
E1226 22:30:16.363291   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/calico-546972/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-175432 -n newest-cni-175432
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-175432 -n newest-cni-175432: exit status 2 (333.444316ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-175432 -n newest-cni-175432
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-175432 -n newest-cni-175432: exit status 2 (329.547814ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-175432 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-175432 -n newest-cni-175432
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-175432 -n newest-cni-175432
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.91s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-fgzm2" [3343832c-7bc7-41f2-acef-d48dc2e1c3c1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004415757s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-fgzm2" [3343832c-7bc7-41f2-acef-d48dc2e1c3c1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003635672s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-884510 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-884510 image list --format=json
E1226 22:31:07.005074   13976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-7214/.minikube/profiles/enable-default-cni-546972/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (2.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-884510 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-884510 -n old-k8s-version-884510
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-884510 -n old-k8s-version-884510: exit status 2 (331.710465ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-884510 -n old-k8s-version-884510
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-884510 -n old-k8s-version-884510: exit status 2 (325.264824ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-884510 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-884510 -n old-k8s-version-884510
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-884510 -n old-k8s-version-884510
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.92s)

Test skip (27/316)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.16s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-546972 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-546972

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-546972

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-546972

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-546972

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-546972

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-546972

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-546972

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-546972

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-546972

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-546972

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-546972

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-546972" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-546972" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-546972

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546972"

                                                
                                                
----------------------- debugLogs end: kubenet-546972 [took: 3.963616361s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-546972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-546972
--- SKIP: TestNetworkPlugins/group/kubenet (4.16s)

TestNetworkPlugins/group/cilium (4.26s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-546972 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-546972

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-546972

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-546972

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-546972

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-546972

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-546972

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-546972

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-546972

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-546972

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-546972

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-546972

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-546972" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-546972

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-546972

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-546972

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-546972

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-546972" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-546972" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-546972

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-546972" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546972"

                                                
                                                
----------------------- debugLogs end: cilium-546972 [took: 4.095839277s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-546972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-546972
--- SKIP: TestNetworkPlugins/group/cilium (4.26s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-403966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-403966
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)