Test Report: Docker_Linux_crio 17953

eb30bbcea83871e91962f38accf20a5558557b42:2024-01-15:32709

Failed tests (3/320)

| Order | Failed test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
| 39    | TestAddons/parallel/Ingress                          | 152.1        |
| 171   | TestIngressAddonLegacy/serial/ValidateIngressAddons  | 176.37       |
| 221   | TestMultiNode/serial/PingHostFrom2Pods               | 2.9          |
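
To re-run the Ingress failure below against the same driver and runtime, the usual invocation from a minikube checkout at the commit above looks roughly like this (a sketch; any extra flags this particular CI job passes are not visible in the report):

	go test -v -timeout 60m ./test/integration \
	  -run "TestAddons/parallel/Ingress" \
	  --minikube-start-args="--driver=docker --container-runtime=crio"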
TestAddons/parallel/Ingress (152.1s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-154292 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-154292 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-154292 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [fd57d633-79f3-4e03-ae63-05b07f9c6633] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [fd57d633-79f3-4e03-ae63-05b07f9c6633] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003285417s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-154292 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-154292 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.912475994s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-154292 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-154292 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-154292 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-154292 addons disable ingress-dns --alsologtostderr -v=1: (1.479498424s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-154292 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-154292 addons disable ingress --alsologtostderr -v=1: (7.62745859s)
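
The failing step above is the ssh curl probe at addons_test.go:262: the remote command's exit status 28 is curl's CURLE_OPERATION_TIMEDOUT, meaning nothing answered on 127.0.0.1:80 inside the node before the timeout, even though the nginx pod itself went Ready. A minimal manual re-check, assuming the addons-154292 profile were still running with the ingress addon enabled and the testdata manifests applied (the test has already torn them down by this point):

	# Is the ingress-nginx controller actually Ready?
	kubectl --context addons-154292 -n ingress-nginx get pods -l app.kubernetes.io/component=controller

	# Did the test Ingress get an address?
	kubectl --context addons-154292 get ingress

	# Repeat the probe with verbose output and an explicit timeout so a hang is
	# distinguishable from a refused connection.
	out/minikube-linux-amd64 -p addons-154292 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"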
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-154292
helpers_test.go:235: (dbg) docker inspect addons-154292:

-- stdout --
	[
	    {
	        "Id": "3da5e64e852e1ecd7c0138bbb459069368c9aa7af3b85743a24b0af83ff477e3",
	        "Created": "2024-01-15T09:27:15.591891367Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 13836,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-15T09:27:15.868910904Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/3da5e64e852e1ecd7c0138bbb459069368c9aa7af3b85743a24b0af83ff477e3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3da5e64e852e1ecd7c0138bbb459069368c9aa7af3b85743a24b0af83ff477e3/hostname",
	        "HostsPath": "/var/lib/docker/containers/3da5e64e852e1ecd7c0138bbb459069368c9aa7af3b85743a24b0af83ff477e3/hosts",
	        "LogPath": "/var/lib/docker/containers/3da5e64e852e1ecd7c0138bbb459069368c9aa7af3b85743a24b0af83ff477e3/3da5e64e852e1ecd7c0138bbb459069368c9aa7af3b85743a24b0af83ff477e3-json.log",
	        "Name": "/addons-154292",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-154292:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-154292",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/915078238f5ccf91426a9f08582a0b804bd3aaae44edc3367fd37cbeddb7a56f-init/diff:/var/lib/docker/overlay2/d9ef098e29db67903afbff93fb25a8f837156cdbfdd0e74ced52d24f8de7a26c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/915078238f5ccf91426a9f08582a0b804bd3aaae44edc3367fd37cbeddb7a56f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/915078238f5ccf91426a9f08582a0b804bd3aaae44edc3367fd37cbeddb7a56f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/915078238f5ccf91426a9f08582a0b804bd3aaae44edc3367fd37cbeddb7a56f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-154292",
	                "Source": "/var/lib/docker/volumes/addons-154292/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-154292",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-154292",
	                "name.minikube.sigs.k8s.io": "addons-154292",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d4b1c29fbcd6f203230469b3c2947183fa97fe3700efdbe43672aa532c36f115",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d4b1c29fbcd6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-154292": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3da5e64e852e",
	                        "addons-154292"
	                    ],
	                    "NetworkID": "0f329170300fe22860ddb55ca98dad7278cfdcb86df040abb85a20308a87deae",
	                    "EndpointID": "97f5a467e8710564ec15a3f9c7b85884b41e0527a5563b7cf05c512c4e13de12",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
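
One detail worth noting in the inspect output above: 22/tcp is published on 127.0.0.1:32772, which is the host endpoint the ssh probe dials. The port can be pulled out directly with the same --format expression minikube itself uses later in this log when provisioning the machine:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-154292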
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-154292 -n addons-154292
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-154292 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-154292 logs -n 25: (1.215050151s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-598232                                                                     | download-only-598232   | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| delete  | -p download-only-567794                                                                     | download-only-567794   | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| start   | --download-only -p                                                                          | download-docker-834376 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | download-docker-834376                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-834376                                                                   | download-docker-834376 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-488440   | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | binary-mirror-488440                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43641                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-488440                                                                     | binary-mirror-488440   | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| addons  | disable dashboard -p                                                                        | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | addons-154292                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | addons-154292                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-154292 --wait=true                                                                | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:29 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | -p addons-154292                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | addons-154292                                                                               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | addons-154292                                                                               |                        |         |         |                     |                     |
	| ip      | addons-154292 ip                                                                            | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	| addons  | addons-154292 addons disable                                                                | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-154292 addons                                                                        | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-154292 addons disable                                                                | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | -p addons-154292                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-154292 ssh curl -s                                                                   | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ssh     | addons-154292 ssh cat                                                                       | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:29 UTC |
	|         | /opt/local-path-provisioner/pvc-f7279b53-de25-4edb-8917-9e502cb81cfd_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-154292 addons disable                                                                | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:29 UTC | 15 Jan 24 09:30 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-154292 addons                                                                        | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:30 UTC | 15 Jan 24 09:30 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-154292 addons                                                                        | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:30 UTC | 15 Jan 24 09:30 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-154292 ip                                                                            | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:31 UTC | 15 Jan 24 09:31 UTC |
	| addons  | addons-154292 addons disable                                                                | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:31 UTC | 15 Jan 24 09:31 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-154292 addons disable                                                                | addons-154292          | jenkins | v1.32.0 | 15 Jan 24 09:31 UTC | 15 Jan 24 09:31 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 09:26:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 09:26:51.307631   13224 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:26:51.307749   13224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:26:51.307759   13224 out.go:309] Setting ErrFile to fd 2...
	I0115 09:26:51.307765   13224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:26:51.307968   13224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-3696/.minikube/bin
	I0115 09:26:51.308584   13224 out.go:303] Setting JSON to false
	I0115 09:26:51.309583   13224 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":561,"bootTime":1705310250,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:26:51.309653   13224 start.go:138] virtualization: kvm guest
	I0115 09:26:51.312274   13224 out.go:177] * [addons-154292] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 09:26:51.313913   13224 notify.go:220] Checking for updates...
	I0115 09:26:51.313924   13224 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 09:26:51.316608   13224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:26:51.318299   13224 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:26:51.319722   13224 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube
	I0115 09:26:51.321153   13224 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 09:26:51.322592   13224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 09:26:51.324043   13224 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:26:51.344136   13224 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 09:26:51.344230   13224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 09:26:51.392407   13224 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-15 09:26:51.384000484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 09:26:51.392510   13224 docker.go:295] overlay module found
	I0115 09:26:51.395213   13224 out.go:177] * Using the docker driver based on user configuration
	I0115 09:26:51.396679   13224 start.go:298] selected driver: docker
	I0115 09:26:51.396691   13224 start.go:902] validating driver "docker" against <nil>
	I0115 09:26:51.396704   13224 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 09:26:51.397525   13224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 09:26:51.449027   13224 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-15 09:26:51.440643039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 09:26:51.449199   13224 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 09:26:51.449660   13224 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 09:26:51.451910   13224 out.go:177] * Using Docker driver with root privileges
	I0115 09:26:51.453629   13224 cni.go:84] Creating CNI manager for ""
	I0115 09:26:51.453657   13224 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0115 09:26:51.453672   13224 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 09:26:51.453699   13224 start_flags.go:321] config:
	{Name:addons-154292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-154292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:26:51.455269   13224 out.go:177] * Starting control plane node addons-154292 in cluster addons-154292
	I0115 09:26:51.456792   13224 cache.go:121] Beginning downloading kic base image for docker with crio
	I0115 09:26:51.458306   13224 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 09:26:51.459660   13224 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:26:51.459697   13224 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0115 09:26:51.459705   13224 cache.go:56] Caching tarball of preloaded images
	I0115 09:26:51.459762   13224 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 09:26:51.459786   13224 preload.go:174] Found /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 09:26:51.459797   13224 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 09:26:51.460106   13224 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/config.json ...
	I0115 09:26:51.460137   13224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/config.json: {Name:mk7eab7028b4f1459f92224b8053a2ab4066b88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:26:51.475345   13224 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 09:26:51.475429   13224 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0115 09:26:51.475444   13224 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0115 09:26:51.475448   13224 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0115 09:26:51.475455   13224 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0115 09:26:51.475462   13224 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from local cache
	I0115 09:27:02.744753   13224 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from cached tarball
	I0115 09:27:02.744787   13224 cache.go:194] Successfully downloaded all kic artifacts
	I0115 09:27:02.744822   13224 start.go:365] acquiring machines lock for addons-154292: {Name:mkb27e691e5e285b804f4deb92a1c6e88bb60303 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:27:02.744947   13224 start.go:369] acquired machines lock for "addons-154292" in 102.104µs
	I0115 09:27:02.744972   13224 start.go:93] Provisioning new machine with config: &{Name:addons-154292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-154292 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 09:27:02.745090   13224 start.go:125] createHost starting for "" (driver="docker")
	I0115 09:27:02.747292   13224 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0115 09:27:02.747588   13224 start.go:159] libmachine.API.Create for "addons-154292" (driver="docker")
	I0115 09:27:02.747621   13224 client.go:168] LocalClient.Create starting
	I0115 09:27:02.747758   13224 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem
	I0115 09:27:02.860076   13224 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem
	I0115 09:27:03.034865   13224 cli_runner.go:164] Run: docker network inspect addons-154292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 09:27:03.050273   13224 cli_runner.go:211] docker network inspect addons-154292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 09:27:03.050340   13224 network_create.go:281] running [docker network inspect addons-154292] to gather additional debugging logs...
	I0115 09:27:03.050360   13224 cli_runner.go:164] Run: docker network inspect addons-154292
	W0115 09:27:03.066682   13224 cli_runner.go:211] docker network inspect addons-154292 returned with exit code 1
	I0115 09:27:03.066710   13224 network_create.go:284] error running [docker network inspect addons-154292]: docker network inspect addons-154292: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-154292 not found
	I0115 09:27:03.066721   13224 network_create.go:286] output of [docker network inspect addons-154292]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-154292 not found
	
	** /stderr **
	I0115 09:27:03.066824   13224 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 09:27:03.082448   13224 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002554c60}
	I0115 09:27:03.082496   13224 network_create.go:124] attempt to create docker network addons-154292 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0115 09:27:03.082561   13224 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-154292 addons-154292
	I0115 09:27:03.134099   13224 network_create.go:108] docker network addons-154292 192.168.49.0/24 created
	I0115 09:27:03.134131   13224 kic.go:121] calculated static IP "192.168.49.2" for the "addons-154292" container
	I0115 09:27:03.134204   13224 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 09:27:03.149086   13224 cli_runner.go:164] Run: docker volume create addons-154292 --label name.minikube.sigs.k8s.io=addons-154292 --label created_by.minikube.sigs.k8s.io=true
	I0115 09:27:03.165581   13224 oci.go:103] Successfully created a docker volume addons-154292
	I0115 09:27:03.165649   13224 cli_runner.go:164] Run: docker run --rm --name addons-154292-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-154292 --entrypoint /usr/bin/test -v addons-154292:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 09:27:10.365489   13224 cli_runner.go:217] Completed: docker run --rm --name addons-154292-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-154292 --entrypoint /usr/bin/test -v addons-154292:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (7.19980035s)
	I0115 09:27:10.365518   13224 oci.go:107] Successfully prepared a docker volume addons-154292
	I0115 09:27:10.365529   13224 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:27:10.365561   13224 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 09:27:10.365627   13224 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-154292:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 09:27:15.523408   13224 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-154292:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.157738541s)
	I0115 09:27:15.523436   13224 kic.go:203] duration metric: took 5.157887 seconds to extract preloaded images to volume
	W0115 09:27:15.523574   13224 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0115 09:27:15.523670   13224 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0115 09:27:15.578039   13224 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-154292 --name addons-154292 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-154292 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-154292 --network addons-154292 --ip 192.168.49.2 --volume addons-154292:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0115 09:27:15.876840   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Running}}
	I0115 09:27:15.893869   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:15.911697   13224 cli_runner.go:164] Run: docker exec addons-154292 stat /var/lib/dpkg/alternatives/iptables
	I0115 09:27:15.957190   13224 oci.go:144] the created container "addons-154292" has a running status.
	I0115 09:27:15.957225   13224 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa...
	I0115 09:27:16.182983   13224 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0115 09:27:16.204635   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:16.223864   13224 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0115 09:27:16.223894   13224 kic_runner.go:114] Args: [docker exec --privileged addons-154292 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0115 09:27:16.275365   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:16.292973   13224 machine.go:88] provisioning docker machine ...
	I0115 09:27:16.293013   13224 ubuntu.go:169] provisioning hostname "addons-154292"
	I0115 09:27:16.293070   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:16.308684   13224 main.go:141] libmachine: Using SSH client type: native
	I0115 09:27:16.311501   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0115 09:27:16.311528   13224 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-154292 && echo "addons-154292" | sudo tee /etc/hostname
	I0115 09:27:16.515137   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-154292
	
	I0115 09:27:16.515235   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:16.532666   13224 main.go:141] libmachine: Using SSH client type: native
	I0115 09:27:16.532999   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0115 09:27:16.533017   13224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-154292' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-154292/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-154292' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 09:27:16.664971   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
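The hostname provisioning above sets the container hostname and then runs the /etc/hosts script shown, which only rewrites the 127.0.1.1 entry when it is missing or stale. A quick way to confirm the result inside the node (a sketch, not part of the test flow):

	docker exec addons-154292 hostname
	docker exec addons-154292 grep 127.0.1.1 /etc/hosts   # expected: 127.0.1.1 addons-154292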
	I0115 09:27:16.665020   13224 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17953-3696/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-3696/.minikube}
	I0115 09:27:16.665068   13224 ubuntu.go:177] setting up certificates
	I0115 09:27:16.665085   13224 provision.go:83] configureAuth start
	I0115 09:27:16.665167   13224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-154292
	I0115 09:27:16.681587   13224 provision.go:138] copyHostCerts
	I0115 09:27:16.681663   13224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-3696/.minikube/ca.pem (1082 bytes)
	I0115 09:27:16.681801   13224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-3696/.minikube/cert.pem (1123 bytes)
	I0115 09:27:16.681891   13224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-3696/.minikube/key.pem (1679 bytes)
	I0115 09:27:16.681966   13224 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca-key.pem org=jenkins.addons-154292 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-154292]
	I0115 09:27:16.835776   13224 provision.go:172] copyRemoteCerts
	I0115 09:27:16.835834   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 09:27:16.835867   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:16.852327   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:16.945716   13224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0115 09:27:16.967018   13224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0115 09:27:16.988010   13224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0115 09:27:17.008889   13224 provision.go:86] duration metric: configureAuth took 343.785867ms
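configureAuth above generates a machine server certificate (SANs include 192.168.49.2, 127.0.0.1, localhost, minikube and the profile name) and copies the CA plus server key pair to /etc/docker on the node. A sketch for inspecting them, assuming openssl is available in the kicbase image:

	docker exec addons-154292 ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
	docker exec addons-154292 openssl x509 -in /etc/docker/server.pem -noout -subject -dates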
	I0115 09:27:17.008923   13224 ubuntu.go:193] setting minikube options for container-runtime
	I0115 09:27:17.009115   13224 config.go:182] Loaded profile config "addons-154292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:27:17.009235   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:17.025612   13224 main.go:141] libmachine: Using SSH client type: native
	I0115 09:27:17.025941   13224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0115 09:27:17.025957   13224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 09:27:17.243473   13224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 09:27:17.243506   13224 machine.go:91] provisioned docker machine in 950.503229ms
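The container-runtime step above writes CRIO_MINIKUBE_OPTIONS (the --insecure-registry flag for the 10.96.0.0/12 service CIDR) into /etc/sysconfig/crio.minikube and restarts cri-o. A sketch for verifying the drop-in and the service state:

	docker exec addons-154292 cat /etc/sysconfig/crio.minikube
	docker exec addons-154292 systemctl is-active crio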
	I0115 09:27:17.243516   13224 client.go:171] LocalClient.Create took 14.495886241s
	I0115 09:27:17.243528   13224 start.go:167] duration metric: libmachine.API.Create for "addons-154292" took 14.495940976s
	I0115 09:27:17.243536   13224 start.go:300] post-start starting for "addons-154292" (driver="docker")
	I0115 09:27:17.243548   13224 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 09:27:17.243608   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 09:27:17.243649   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:17.259349   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:17.353373   13224 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 09:27:17.356460   13224 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0115 09:27:17.356490   13224 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0115 09:27:17.356502   13224 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0115 09:27:17.356510   13224 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0115 09:27:17.356526   13224 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-3696/.minikube/addons for local assets ...
	I0115 09:27:17.356582   13224 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-3696/.minikube/files for local assets ...
	I0115 09:27:17.356605   13224 start.go:303] post-start completed in 113.062611ms
	I0115 09:27:17.356900   13224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-154292
	I0115 09:27:17.373152   13224 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/config.json ...
	I0115 09:27:17.373445   13224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 09:27:17.373504   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:17.389547   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:17.481561   13224 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 09:27:17.485817   13224 start.go:128] duration metric: createHost completed in 14.740684695s
	I0115 09:27:17.485857   13224 start.go:83] releasing machines lock for "addons-154292", held for 14.740894468s
	I0115 09:27:17.485936   13224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-154292
	I0115 09:27:17.502615   13224 ssh_runner.go:195] Run: cat /version.json
	I0115 09:27:17.502669   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:17.502735   13224 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 09:27:17.502801   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:17.519468   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:17.520992   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:17.699452   13224 ssh_runner.go:195] Run: systemctl --version
	I0115 09:27:17.703656   13224 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 09:27:17.840414   13224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 09:27:17.844579   13224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 09:27:17.866839   13224 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0115 09:27:17.866944   13224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 09:27:17.893058   13224 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
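The CNI cleanup above renames the stock loopback, bridge and podman configurations by appending a .mk_disabled suffix, so only the configuration minikube manages is picked up by cri-o. A sketch for listing what remains active:

	docker exec addons-154292 ls /etc/cni/net.d/
	# *.mk_disabled files are the disabled stock configs; their content is left untouched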
	I0115 09:27:17.893086   13224 start.go:475] detecting cgroup driver to use...
	I0115 09:27:17.893142   13224 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0115 09:27:17.893188   13224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 09:27:17.906140   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 09:27:17.916149   13224 docker.go:217] disabling cri-docker service (if available) ...
	I0115 09:27:17.916198   13224 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 09:27:17.927843   13224 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 09:27:17.940313   13224 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 09:27:18.020640   13224 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 09:27:18.101472   13224 docker.go:233] disabling docker service ...
	I0115 09:27:18.101527   13224 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 09:27:18.118640   13224 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 09:27:18.128842   13224 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 09:27:18.201020   13224 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 09:27:18.288763   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 09:27:18.298666   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 09:27:18.312279   13224 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 09:27:18.312338   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:27:18.320357   13224 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 09:27:18.320426   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:27:18.328537   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:27:18.336514   13224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:27:18.344839   13224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 09:27:18.352772   13224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 09:27:18.359803   13224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 09:27:18.366925   13224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 09:27:18.436014   13224 ssh_runner.go:195] Run: sudo systemctl restart crio
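The sed edits above point cri-o at the registry.k8s.io/pause:3.9 pause image, force cgroup_manager = "cgroupfs", and re-add conmon_cgroup = "pod" before the daemon reload and crio restart. A sketch for checking the resulting drop-in (the key names come from the sed commands above; everything else in the file is whatever kicbase ships):

	docker exec addons-154292 cat /etc/crio/crio.conf.d/02-crio.conf
	# expect, among other settings:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"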
	I0115 09:27:18.541540   13224 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 09:27:18.541616   13224 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 09:27:18.544838   13224 start.go:543] Will wait 60s for crictl version
	I0115 09:27:18.544878   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:27:18.547873   13224 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 09:27:18.579427   13224 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0115 09:27:18.579538   13224 ssh_runner.go:195] Run: crio --version
	I0115 09:27:18.612361   13224 ssh_runner.go:195] Run: crio --version
	I0115 09:27:18.645978   13224 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0115 09:27:18.647457   13224 cli_runner.go:164] Run: docker network inspect addons-154292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 09:27:18.663083   13224 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0115 09:27:18.666419   13224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
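The host.minikube.internal update above uses a filter-and-rewrite pattern: drop any stale entry, append the fresh one, write to a temp file and copy it back over /etc/hosts. The same pattern in a generic form (a sketch with placeholder variables, run inside the node as above):

	name=host.minikube.internal; ip=192.168.49.1
	{ grep -v "$name\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$" && sudo cp "/tmp/hosts.$$" /etc/hosts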
	I0115 09:27:18.676402   13224 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:27:18.676454   13224 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 09:27:18.729392   13224 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 09:27:18.729416   13224 crio.go:415] Images already preloaded, skipping extraction
	I0115 09:27:18.729456   13224 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 09:27:18.759663   13224 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 09:27:18.759684   13224 cache_images.go:84] Images are preloaded, skipping loading
	I0115 09:27:18.759740   13224 ssh_runner.go:195] Run: crio config
	I0115 09:27:18.798904   13224 cni.go:84] Creating CNI manager for ""
	I0115 09:27:18.798924   13224 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0115 09:27:18.798939   13224 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 09:27:18.798957   13224 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-154292 NodeName:addons-154292 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 09:27:18.799083   13224 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-154292"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 09:27:18.799147   13224 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-154292 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-154292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 09:27:18.799193   13224 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 09:27:18.806874   13224 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 09:27:18.806931   13224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 09:27:18.814715   13224 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0115 09:27:18.829873   13224 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 09:27:18.845460   13224 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
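At this point the kubelet unit drop-in, the kubelet service file and the rendered kubeadm configuration have all been staged on the node. The test flow proceeds straight to kubeadm init below; as an optional sanity check (not part of the flow), the staged config could be exercised with a dry run first, using the binary location and paths shown in the log:

	docker exec addons-154292 sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run --ignore-preflight-errors=all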
	I0115 09:27:18.861041   13224 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0115 09:27:18.864247   13224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 09:27:18.874279   13224 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292 for IP: 192.168.49.2
	I0115 09:27:18.874319   13224 certs.go:190] acquiring lock for shared ca certs: {Name:mk436e7b36fef987bcfd7cb65df7b354c02b1a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:18.874453   13224 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.key
	I0115 09:27:19.103234   13224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt ...
	I0115 09:27:19.103268   13224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt: {Name:mk7a010753ac37526ee5dc18561830d09e0e3860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:19.103432   13224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-3696/.minikube/ca.key ...
	I0115 09:27:19.103443   13224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/ca.key: {Name:mk661e7e09d0972b3510544f6d72fc103911e594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:19.103510   13224 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.key
	I0115 09:27:19.176724   13224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.crt ...
	I0115 09:27:19.176759   13224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.crt: {Name:mk0a4d663d097e6a1a71e011d141c3398f8124e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:19.176940   13224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.key ...
	I0115 09:27:19.176952   13224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.key: {Name:mkc3d8ead5aa3051672229d07a9ee6ff5ba1c192 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:19.177068   13224 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.key
	I0115 09:27:19.177084   13224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt with IP's: []
	I0115 09:27:19.314260   13224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt ...
	I0115 09:27:19.314297   13224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: {Name:mk9c3ad34750a7225e78a3fe22c41f927c4f2719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:19.314481   13224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.key ...
	I0115 09:27:19.314493   13224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.key: {Name:mked72d39b49b45571905c4ae3ab5da10efaf126 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:19.314575   13224 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/apiserver.key.dd3b5fb2
	I0115 09:27:19.314594   13224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0115 09:27:19.424598   13224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/apiserver.crt.dd3b5fb2 ...
	I0115 09:27:19.424632   13224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/apiserver.crt.dd3b5fb2: {Name:mka07a2dad20d4545bbf621343a8f1c064caa252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:19.424803   13224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/apiserver.key.dd3b5fb2 ...
	I0115 09:27:19.424819   13224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/apiserver.key.dd3b5fb2: {Name:mk104ee9f97b7970aaa9c7a3f8c0a0418906fc1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:19.424892   13224 certs.go:337] copying /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/apiserver.crt
	I0115 09:27:19.424966   13224 certs.go:341] copying /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/apiserver.key
	I0115 09:27:19.425015   13224 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/proxy-client.key
	I0115 09:27:19.425031   13224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/proxy-client.crt with IP's: []
	I0115 09:27:19.565239   13224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/proxy-client.crt ...
	I0115 09:27:19.565274   13224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/proxy-client.crt: {Name:mke55540b86e73220f6010d44987c6d02e2eddfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:19.565436   13224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/proxy-client.key ...
	I0115 09:27:19.565448   13224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/proxy-client.key: {Name:mkb39c26ad54101e073b5460fefd15336b5de040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:19.565626   13224 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca-key.pem (1675 bytes)
	I0115 09:27:19.565665   13224 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem (1082 bytes)
	I0115 09:27:19.565694   13224 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem (1123 bytes)
	I0115 09:27:19.565727   13224 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/key.pem (1679 bytes)
	I0115 09:27:19.566295   13224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 09:27:19.587615   13224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 09:27:19.607518   13224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 09:27:19.627853   13224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 09:27:19.648145   13224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 09:27:19.668513   13224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 09:27:19.689362   13224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 09:27:19.709734   13224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 09:27:19.729988   13224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 09:27:19.750486   13224 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 09:27:19.765744   13224 ssh_runner.go:195] Run: openssl version
	I0115 09:27:19.770558   13224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 09:27:19.778851   13224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:27:19.782016   13224 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:27:19.782075   13224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:27:19.788216   13224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
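The openssl steps above install the minikube CA under /usr/share/ca-certificates and add a hash-named symlink in /etc/ssl/certs so OpenSSL-based clients can find it by subject hash. The hash in the symlink name comes from openssl itself (a sketch; b5213941 is the value visible in the command above):

	docker exec addons-154292 openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints the subject hash, b5213941 here, which matches /etc/ssl/certs/b5213941.0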
	I0115 09:27:19.796243   13224 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 09:27:19.799183   13224 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 09:27:19.799227   13224 kubeadm.go:404] StartCluster: {Name:addons-154292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-154292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:27:19.799308   13224 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 09:27:19.799361   13224 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 09:27:19.831541   13224 cri.go:89] found id: ""
	I0115 09:27:19.831607   13224 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 09:27:19.839711   13224 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 09:27:19.847674   13224 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0115 09:27:19.847730   13224 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 09:27:19.855333   13224 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 09:27:19.855389   13224 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0115 09:27:19.898481   13224 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0115 09:27:19.898612   13224 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 09:27:19.931247   13224 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0115 09:27:19.931329   13224 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1048-gcp
	I0115 09:27:19.931379   13224 kubeadm.go:322] OS: Linux
	I0115 09:27:19.931467   13224 kubeadm.go:322] CGROUPS_CPU: enabled
	I0115 09:27:19.931560   13224 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0115 09:27:19.931616   13224 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0115 09:27:19.931673   13224 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0115 09:27:19.931734   13224 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0115 09:27:19.931785   13224 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0115 09:27:19.931824   13224 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0115 09:27:19.931864   13224 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0115 09:27:19.931913   13224 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0115 09:27:19.990826   13224 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 09:27:19.990957   13224 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 09:27:19.991092   13224 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 09:27:20.174042   13224 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 09:27:20.177738   13224 out.go:204]   - Generating certificates and keys ...
	I0115 09:27:20.177846   13224 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 09:27:20.177921   13224 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 09:27:20.247792   13224 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 09:27:20.353130   13224 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0115 09:27:20.460808   13224 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0115 09:27:20.544631   13224 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0115 09:27:21.098407   13224 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0115 09:27:21.098588   13224 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-154292 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 09:27:21.156337   13224 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0115 09:27:21.156484   13224 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-154292 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 09:27:21.273464   13224 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 09:27:21.408954   13224 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 09:27:21.572062   13224 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0115 09:27:21.572168   13224 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 09:27:21.622251   13224 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 09:27:21.681463   13224 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 09:27:21.832410   13224 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 09:27:22.014042   13224 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 09:27:22.014948   13224 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 09:27:22.017424   13224 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 09:27:22.019761   13224 out.go:204]   - Booting up control plane ...
	I0115 09:27:22.019857   13224 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 09:27:22.019940   13224 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 09:27:22.020295   13224 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 09:27:22.028355   13224 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 09:27:22.029192   13224 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 09:27:22.029255   13224 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0115 09:27:22.101734   13224 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 09:27:27.103557   13224 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002020 seconds
	I0115 09:27:27.103693   13224 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 09:27:27.114782   13224 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 09:27:27.633491   13224 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 09:27:27.633670   13224 kubeadm.go:322] [mark-control-plane] Marking the node addons-154292 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0115 09:27:28.144629   13224 kubeadm.go:322] [bootstrap-token] Using token: 27m71z.sl39cf1gebbhmjw7
	I0115 09:27:28.146384   13224 out.go:204]   - Configuring RBAC rules ...
	I0115 09:27:28.146531   13224 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 09:27:28.150775   13224 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 09:27:28.159614   13224 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 09:27:28.162684   13224 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 09:27:28.165625   13224 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 09:27:28.168732   13224 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 09:27:28.180395   13224 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 09:27:28.357999   13224 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 09:27:28.554552   13224 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 09:27:28.555847   13224 kubeadm.go:322] 
	I0115 09:27:28.555946   13224 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 09:27:28.555956   13224 kubeadm.go:322] 
	I0115 09:27:28.556037   13224 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 09:27:28.556056   13224 kubeadm.go:322] 
	I0115 09:27:28.556103   13224 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 09:27:28.556192   13224 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 09:27:28.556245   13224 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 09:27:28.556251   13224 kubeadm.go:322] 
	I0115 09:27:28.556294   13224 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0115 09:27:28.556300   13224 kubeadm.go:322] 
	I0115 09:27:28.556338   13224 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0115 09:27:28.556346   13224 kubeadm.go:322] 
	I0115 09:27:28.556404   13224 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 09:27:28.556515   13224 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 09:27:28.556618   13224 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 09:27:28.556629   13224 kubeadm.go:322] 
	I0115 09:27:28.556743   13224 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 09:27:28.556855   13224 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 09:27:28.556871   13224 kubeadm.go:322] 
	I0115 09:27:28.556994   13224 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 27m71z.sl39cf1gebbhmjw7 \
	I0115 09:27:28.557152   13224 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d7912295337f01ac2906deb500e7500df52d877bdb5cb26be73339deab38c6d2 \
	I0115 09:27:28.557203   13224 kubeadm.go:322] 	--control-plane 
	I0115 09:27:28.557218   13224 kubeadm.go:322] 
	I0115 09:27:28.557334   13224 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 09:27:28.557344   13224 kubeadm.go:322] 
	I0115 09:27:28.557452   13224 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 27m71z.sl39cf1gebbhmjw7 \
	I0115 09:27:28.557584   13224 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d7912295337f01ac2906deb500e7500df52d877bdb5cb26be73339deab38c6d2 
	I0115 09:27:28.559232   13224 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-gcp\n", err: exit status 1
	I0115 09:27:28.559406   13224 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
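kubeadm init has completed at this point; the two warnings above (missing kernel "configs" module, kubelet service not enabled) appear non-fatal here, since init printed its success message and join instructions. A sketch for confirming the control plane from inside the node with the generated admin kubeconfig (not part of the test flow):

	docker exec addons-154292 sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
	  --kubeconfig=/etc/kubernetes/admin.conf get nodes
	docker exec addons-154292 sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
	  --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system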
	I0115 09:27:28.559433   13224 cni.go:84] Creating CNI manager for ""
	I0115 09:27:28.559440   13224 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0115 09:27:28.561562   13224 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0115 09:27:28.563316   13224 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 09:27:28.567129   13224 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0115 09:27:28.567151   13224 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 09:27:28.583007   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
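The kindnet CNI manifest is applied above with the node's own kubectl against /var/lib/minikube/kubeconfig. A sketch for checking that the CNI DaemonSet was created (listing all DaemonSets avoids assuming its exact name):

	docker exec addons-154292 sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig get ds -n kube-system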
	I0115 09:27:29.215411   13224 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 09:27:29.215559   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:29.215559   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=addons-154292 minikube.k8s.io/updated_at=2024_01_15T09_27_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:29.222037   13224 ops.go:34] apiserver oom_adj: -16
	I0115 09:27:29.279935   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:29.780284   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:30.280845   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:30.780731   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:31.280520   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:31.780580   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:32.280571   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:32.780420   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:33.280198   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:33.780928   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:34.280159   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:34.780303   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:35.280691   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:35.780876   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:36.280321   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:36.780902   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:37.280757   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:37.780707   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:38.280352   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:38.780626   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:39.280335   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:39.780061   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:40.280193   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:40.780050   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:41.280524   13224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:27:41.390158   13224 kubeadm.go:1088] duration metric: took 12.174658889s to wait for elevateKubeSystemPrivileges.
	I0115 09:27:41.390196   13224 kubeadm.go:406] StartCluster complete in 21.590972575s
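The run of identical "kubectl get sa default" calls between 09:27:29 and 09:27:41 above is a poll loop: after creating the minikube-rbac cluster-admin binding for kube-system:default at 09:27:29.215, minikube waits for the default service account to be created by the controller manager before continuing. The wait reduces to something like this sketch:

	until docker exec addons-154292 sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
	      get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # retry until the "default" service account exists
	done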
	I0115 09:27:41.390213   13224 settings.go:142] acquiring lock: {Name:mkbf6aded3b549fa4f3ab1cad294a9ebed536616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:41.390315   13224 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:27:41.390718   13224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/kubeconfig: {Name:mk31241d29ab70870dc379ecd59996acb9413d82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:27:41.390897   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 09:27:41.390960   13224 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
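The addons phase above starts from the toEnable map and then enables each addon concurrently, which is why the "Setting addon ..." lines that follow are interleaved. Once the profile is up, the effective addon state can be listed with the same minikube binary used elsewhere in this report (sketch):

	out/minikube-linux-amd64 -p addons-154292 addons list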
	I0115 09:27:41.391060   13224 addons.go:69] Setting ingress=true in profile "addons-154292"
	I0115 09:27:41.391067   13224 addons.go:69] Setting ingress-dns=true in profile "addons-154292"
	I0115 09:27:41.391083   13224 addons.go:234] Setting addon ingress-dns=true in "addons-154292"
	I0115 09:27:41.391091   13224 addons.go:234] Setting addon ingress=true in "addons-154292"
	I0115 09:27:41.391085   13224 addons.go:69] Setting default-storageclass=true in profile "addons-154292"
	I0115 09:27:41.391114   13224 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-154292"
	I0115 09:27:41.391137   13224 config.go:182] Loaded profile config "addons-154292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:27:41.391146   13224 addons.go:69] Setting inspektor-gadget=true in profile "addons-154292"
	I0115 09:27:41.391147   13224 addons.go:69] Setting gcp-auth=true in profile "addons-154292"
	I0115 09:27:41.391156   13224 addons.go:234] Setting addon inspektor-gadget=true in "addons-154292"
	I0115 09:27:41.391165   13224 mustload.go:65] Loading cluster: addons-154292
	I0115 09:27:41.391141   13224 host.go:66] Checking if "addons-154292" exists ...
	I0115 09:27:41.391188   13224 host.go:66] Checking if "addons-154292" exists ...
	I0115 09:27:41.391141   13224 host.go:66] Checking if "addons-154292" exists ...
	I0115 09:27:41.391329   13224 addons.go:69] Setting registry=true in profile "addons-154292"
	I0115 09:27:41.391352   13224 addons.go:69] Setting helm-tiller=true in profile "addons-154292"
	I0115 09:27:41.391359   13224 addons.go:234] Setting addon registry=true in "addons-154292"
	I0115 09:27:41.391368   13224 addons.go:234] Setting addon helm-tiller=true in "addons-154292"
	I0115 09:27:41.391408   13224 host.go:66] Checking if "addons-154292" exists ...
	I0115 09:27:41.391418   13224 host.go:66] Checking if "addons-154292" exists ...
	I0115 09:27:41.391411   13224 addons.go:69] Setting metrics-server=true in profile "addons-154292"
	I0115 09:27:41.391427   13224 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-154292"
	I0115 09:27:41.391447   13224 addons.go:234] Setting addon metrics-server=true in "addons-154292"
	I0115 09:27:41.391455   13224 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-154292"
	I0115 09:27:41.391487   13224 addons.go:69] Setting storage-provisioner=true in profile "addons-154292"
	I0115 09:27:41.391504   13224 addons.go:234] Setting addon storage-provisioner=true in "addons-154292"
	I0115 09:27:41.391535   13224 host.go:66] Checking if "addons-154292" exists ...
	I0115 09:27:41.391721   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.391831   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.391940   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.391944   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.392086   13224 host.go:66] Checking if "addons-154292" exists ...
	I0115 09:27:41.392153   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.392298   13224 addons.go:69] Setting volumesnapshots=true in profile "addons-154292"
	I0115 09:27:41.392327   13224 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-154292"
	I0115 09:27:41.392357   13224 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-154292"
	I0115 09:27:41.392374   13224 addons.go:69] Setting cloud-spanner=true in profile "addons-154292"
	I0115 09:27:41.392396   13224 addons.go:234] Setting addon cloud-spanner=true in "addons-154292"
	I0115 09:27:41.391720   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.392469   13224 host.go:66] Checking if "addons-154292" exists ...
	I0115 09:27:41.392507   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.392357   13224 addons.go:234] Setting addon volumesnapshots=true in "addons-154292"
	I0115 09:27:41.393194   13224 host.go:66] Checking if "addons-154292" exists ...
	I0115 09:27:41.393621   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.392403   13224 host.go:66] Checking if "addons-154292" exists ...
	I0115 09:27:41.394147   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.391345   13224 config.go:182] Loaded profile config "addons-154292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:27:41.394819   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.392640   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.391060   13224 addons.go:69] Setting yakd=true in profile "addons-154292"
	I0115 09:27:41.395863   13224 addons.go:234] Setting addon yakd=true in "addons-154292"
	I0115 09:27:41.395910   13224 host.go:66] Checking if "addons-154292" exists ...
	I0115 09:27:41.392411   13224 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-154292"
	I0115 09:27:41.396887   13224 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-154292"
	I0115 09:27:41.396946   13224 host.go:66] Checking if "addons-154292" exists ...
	I0115 09:27:41.397472   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.392992   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.419395   13224 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0115 09:27:41.417201   13224 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-154292"
	I0115 09:27:41.417538   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.417695   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.422823   13224 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 09:27:41.424403   13224 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0115 09:27:41.429795   13224 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0115 09:27:41.429818   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0115 09:27:41.429885   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:41.424351   13224 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0115 09:27:41.431731   13224 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 09:27:41.422455   13224 host.go:66] Checking if "addons-154292" exists ...
	I0115 09:27:41.421235   13224 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 09:27:41.433247   13224 host.go:66] Checking if "addons-154292" exists ...
	I0115 09:27:41.435823   13224 out.go:177]   - Using image docker.io/registry:2.8.3
	I0115 09:27:41.434519   13224 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 09:27:41.434755   13224 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 09:27:41.434769   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 09:27:41.435049   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.442397   13224 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0115 09:27:41.440811   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0115 09:27:41.440882   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:41.445885   13224 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0115 09:27:41.445932   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:41.449692   13224 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0115 09:27:41.448191   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0115 09:27:41.448247   13224 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 09:27:41.450854   13224 addons.go:234] Setting addon default-storageclass=true in "addons-154292"
	I0115 09:27:41.451990   13224 host.go:66] Checking if "addons-154292" exists ...
	I0115 09:27:41.452108   13224 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0115 09:27:41.452143   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0115 09:27:41.452220   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:41.452339   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:41.452507   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:41.452691   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 09:27:41.452736   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:41.459187   13224 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0115 09:27:41.460825   13224 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0115 09:27:41.462806   13224 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0115 09:27:41.462829   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0115 09:27:41.462890   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:41.469169   13224 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0115 09:27:41.471375   13224 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0115 09:27:41.471298   13224 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0115 09:27:41.471349   13224 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0115 09:27:41.475070   13224 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 09:27:41.475091   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0115 09:27:41.475159   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:41.473613   13224 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0115 09:27:41.477275   13224 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0115 09:27:41.477295   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0115 09:27:41.477374   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:41.485977   13224 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0115 09:27:41.487318   13224 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0115 09:27:41.497399   13224 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0115 09:27:41.497423   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0115 09:27:41.497487   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:41.500119   13224 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0115 09:27:41.502362   13224 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0115 09:27:41.503977   13224 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0115 09:27:41.503997   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0115 09:27:41.504046   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:41.502323   13224 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0115 09:27:41.502566   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:41.508563   13224 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0115 09:27:41.510111   13224 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0115 09:27:41.510129   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0115 09:27:41.510184   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:41.507757   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:41.513616   13224 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0115 09:27:41.514922   13224 out.go:177]   - Using image docker.io/busybox:stable
	I0115 09:27:41.516355   13224 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0115 09:27:41.516373   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0115 09:27:41.516421   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:41.519980   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:41.522489   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:41.524047   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:41.532320   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:41.540156   13224 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 09:27:41.540179   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 09:27:41.540222   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:41.546509   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:41.547803   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:41.563461   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:41.573187   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:41.581057   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:41.582336   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:41.582949   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:41.583464   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:41.753216   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0115 09:27:41.827231   13224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 09:27:41.829292   13224 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0115 09:27:41.829365   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0115 09:27:41.831144   13224 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0115 09:27:41.831173   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0115 09:27:41.834833   13224 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0115 09:27:41.834854   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0115 09:27:41.931539   13224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 09:27:41.934532   13224 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-154292" context rescaled to 1 replicas
	I0115 09:27:41.934573   13224 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 09:27:41.937525   13224 out.go:177] * Verifying Kubernetes components...
	I0115 09:27:41.936300   13224 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0115 09:27:41.938951   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:27:41.937554   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0115 09:27:41.946507   13224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0115 09:27:42.025827   13224 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0115 09:27:42.025882   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0115 09:27:42.045621   13224 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0115 09:27:42.045706   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0115 09:27:42.126637   13224 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0115 09:27:42.126718   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0115 09:27:42.128751   13224 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0115 09:27:42.128809   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0115 09:27:42.227336   13224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0115 09:27:42.326291   13224 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 09:27:42.326367   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0115 09:27:42.329226   13224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0115 09:27:42.335080   13224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0115 09:27:42.336761   13224 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0115 09:27:42.336819   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0115 09:27:42.346006   13224 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0115 09:27:42.346086   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0115 09:27:42.425850   13224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0115 09:27:42.427477   13224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 09:27:42.433710   13224 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0115 09:27:42.433739   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0115 09:27:42.440027   13224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 09:27:42.440339   13224 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0115 09:27:42.440374   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0115 09:27:42.527467   13224 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 09:27:42.527551   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 09:27:42.532084   13224 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0115 09:27:42.532167   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0115 09:27:42.738977   13224 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 09:27:42.739053   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 09:27:42.740523   13224 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0115 09:27:42.740575   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0115 09:27:42.829602   13224 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0115 09:27:42.829681   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0115 09:27:42.847922   13224 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0115 09:27:42.848007   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0115 09:27:43.127818   13224 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0115 09:27:43.127890   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0115 09:27:43.140621   13224 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0115 09:27:43.140715   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0115 09:27:43.333280   13224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 09:27:43.339401   13224 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0115 09:27:43.339437   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0115 09:27:43.346400   13224 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0115 09:27:43.346481   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0115 09:27:43.443044   13224 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0115 09:27:43.443075   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0115 09:27:43.533555   13224 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0115 09:27:43.533631   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0115 09:27:43.643609   13224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0115 09:27:43.725721   13224 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0115 09:27:43.725815   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0115 09:27:43.847662   13224 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 09:27:43.847690   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0115 09:27:43.938983   13224 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.185728115s)
	I0115 09:27:43.939017   13224 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0115 09:27:43.944746   13224 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0115 09:27:43.944774   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0115 09:27:44.133227   13224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0115 09:27:44.232528   13224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 09:27:44.436315   13224 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0115 09:27:44.436343   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0115 09:27:45.036196   13224 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0115 09:27:45.036289   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0115 09:27:45.147812   13224 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0115 09:27:45.147839   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0115 09:27:45.441291   13224 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0115 09:27:45.441321   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0115 09:27:45.844634   13224 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0115 09:27:45.844717   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0115 09:27:46.131637   13224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0115 09:27:46.826709   13224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.999374439s)
	I0115 09:27:48.248195   13224 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0115 09:27:48.248413   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:48.266785   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:48.335524   13224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.403946103s)
	I0115 09:27:48.335608   13224 addons.go:470] Verifying addon ingress=true in "addons-154292"
	I0115 09:27:48.335616   13224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.389048226s)
	I0115 09:27:48.335664   13224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.10823491s)
	I0115 09:27:48.337890   13224 out.go:177] * Verifying ingress addon...
	I0115 09:27:48.335542   13224 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (6.39652809s)
	I0115 09:27:48.335766   13224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.006455537s)
	I0115 09:27:48.335814   13224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.000645335s)
	I0115 09:27:48.335861   13224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.909927497s)
	I0115 09:27:48.335907   13224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.908354949s)
	I0115 09:27:48.335940   13224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.895840033s)
	I0115 09:27:48.336007   13224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.002698278s)
	I0115 09:27:48.336059   13224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.692369234s)
	I0115 09:27:48.336123   13224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.202802745s)
	I0115 09:27:48.336210   13224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.103583549s)
	W0115 09:27:48.337969   13224 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0115 09:27:48.337989   13224 addons.go:470] Verifying addon metrics-server=true in "addons-154292"
	I0115 09:27:48.338027   13224 retry.go:31] will retry after 214.03951ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0115 09:27:48.338050   13224 addons.go:470] Verifying addon registry=true in "addons-154292"
	I0115 09:27:48.339091   13224 node_ready.go:35] waiting up to 6m0s for node "addons-154292" to be "Ready" ...
	I0115 09:27:48.340544   13224 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0115 09:27:48.341550   13224 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-154292 service yakd-dashboard -n yakd-dashboard
	
	I0115 09:27:48.341482   13224 out.go:177] * Verifying registry addon...
	W0115 09:27:48.343558   13224 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0115 09:27:48.345451   13224 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0115 09:27:48.346890   13224 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0115 09:27:48.346909   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:48.348278   13224 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0115 09:27:48.348295   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:48.443167   13224 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0115 09:27:48.460093   13224 addons.go:234] Setting addon gcp-auth=true in "addons-154292"
	I0115 09:27:48.460157   13224 host.go:66] Checking if "addons-154292" exists ...
	I0115 09:27:48.460519   13224 cli_runner.go:164] Run: docker container inspect addons-154292 --format={{.State.Status}}
	I0115 09:27:48.477837   13224 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0115 09:27:48.477890   13224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154292
	I0115 09:27:48.496322   13224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/addons-154292/id_rsa Username:docker}
	I0115 09:27:48.554131   13224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 09:27:48.845499   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:48.848546   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:49.241508   13224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.109812199s)
	I0115 09:27:49.241553   13224 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-154292"
	I0115 09:27:49.243314   13224 out.go:177] * Verifying csi-hostpath-driver addon...
	I0115 09:27:49.245948   13224 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0115 09:27:49.256375   13224 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0115 09:27:49.256456   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:49.347201   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:49.351281   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:49.751168   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:49.845870   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:49.849199   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:50.335458   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:50.347247   13224 node_ready.go:58] node "addons-154292" has status "Ready":"False"
	I0115 09:27:50.347371   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:50.350912   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:50.555737   13224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.001534438s)
	I0115 09:27:50.555798   13224 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.077925341s)
	I0115 09:27:50.558174   13224 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0115 09:27:50.626780   13224 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 09:27:50.628310   13224 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0115 09:27:50.628453   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0115 09:27:50.651079   13224 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0115 09:27:50.651149   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0115 09:27:50.751492   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:50.826696   13224 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0115 09:27:50.826778   13224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0115 09:27:50.849988   13224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0115 09:27:50.851672   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:50.853213   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:51.251734   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:51.346256   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:51.350509   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:51.753148   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:51.846490   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:51.849003   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:52.251645   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:52.347797   13224 node_ready.go:58] node "addons-154292" has status "Ready":"False"
	I0115 09:27:52.348102   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:52.353526   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:52.650517   13224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.800434263s)
	I0115 09:27:52.651560   13224 addons.go:470] Verifying addon gcp-auth=true in "addons-154292"
	I0115 09:27:52.654699   13224 out.go:177] * Verifying gcp-auth addon...
	I0115 09:27:52.657233   13224 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0115 09:27:52.660208   13224 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0115 09:27:52.660232   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:27:52.752364   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:52.845936   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:52.849380   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:53.161401   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:27:53.249777   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:53.346042   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:53.348973   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:53.661019   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:27:53.750584   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:53.845408   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:53.848958   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:54.163724   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:27:54.250470   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:54.345825   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:54.348671   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:54.661192   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:27:54.750642   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:54.845069   13224 node_ready.go:58] node "addons-154292" has status "Ready":"False"
	I0115 09:27:54.845650   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:54.848551   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:55.160470   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:27:55.249989   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:55.345039   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:55.349509   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:55.660253   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:27:55.750481   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:55.845388   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:55.848624   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:56.160607   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:27:56.249930   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:56.345237   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:56.348509   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:56.661262   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:27:56.749574   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:56.845401   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:56.848419   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:57.160228   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:27:57.249690   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:57.345423   13224 node_ready.go:58] node "addons-154292" has status "Ready":"False"
	I0115 09:27:57.345889   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:57.348489   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:57.660038   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:27:57.750310   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:57.845203   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:57.848531   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:58.160224   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:27:58.249462   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:58.345840   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:58.348909   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:58.660499   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:27:58.749799   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:58.845902   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:58.848603   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:59.160737   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:27:59.249957   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:59.344796   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:59.349388   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:27:59.660209   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:27:59.749559   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:27:59.845201   13224 node_ready.go:58] node "addons-154292" has status "Ready":"False"
	I0115 09:27:59.845530   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:27:59.848333   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:00.160971   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:00.250454   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:00.345299   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:00.348540   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:00.660627   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:00.749974   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:00.844978   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:00.849629   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:01.160696   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:01.250152   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:01.345711   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:01.348548   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:01.660910   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:01.751475   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:01.845490   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:01.848456   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:02.160382   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:02.249729   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:02.345126   13224 node_ready.go:58] node "addons-154292" has status "Ready":"False"
	I0115 09:28:02.345847   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:02.348482   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:02.659959   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:02.750456   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:02.845257   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:02.848585   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:03.160667   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:03.250198   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:03.345081   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:03.348128   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:03.660836   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:03.750209   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:03.845689   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:03.848869   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:04.160782   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:04.250291   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:04.345399   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:04.348668   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:04.660766   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:04.750003   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:04.844667   13224 node_ready.go:58] node "addons-154292" has status "Ready":"False"
	I0115 09:28:04.845232   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:04.848445   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:05.160104   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:05.250726   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:05.345803   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:05.348491   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:05.660461   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:05.750104   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:05.845607   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:05.848500   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:06.160519   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:06.250138   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:06.345508   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:06.348252   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:06.661156   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:06.750497   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:06.845364   13224 node_ready.go:58] node "addons-154292" has status "Ready":"False"
	I0115 09:28:06.845833   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:06.848493   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:07.160473   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:07.249974   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:07.344954   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:07.349341   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:07.660445   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:07.749775   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:07.845635   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:07.848466   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:08.160476   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:08.250048   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:08.345173   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:08.348393   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:08.660832   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:08.750202   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:08.845069   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:08.848508   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:09.160533   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:09.250152   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:09.344604   13224 node_ready.go:58] node "addons-154292" has status "Ready":"False"
	I0115 09:28:09.344993   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:09.349499   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:09.660296   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:09.749772   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:09.846050   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:09.848971   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:10.177533   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:10.249794   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:10.345892   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:10.348486   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:10.660194   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:10.749638   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:10.846046   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:10.848554   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:11.160676   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:11.250049   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:11.345179   13224 node_ready.go:58] node "addons-154292" has status "Ready":"False"
	I0115 09:28:11.345647   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:11.348646   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:11.660997   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:11.750480   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:11.845898   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:11.848793   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:12.160750   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:12.250275   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:12.345464   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:12.348615   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:12.660355   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:12.749837   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:12.845249   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:12.849551   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:13.160568   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:13.250439   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:13.345015   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:13.349423   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:13.660122   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:13.750662   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:13.845128   13224 node_ready.go:58] node "addons-154292" has status "Ready":"False"
	I0115 09:28:13.845720   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:13.848539   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:14.160559   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:14.250126   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:14.345516   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:14.348231   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:14.661024   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:14.750648   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:14.845643   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:14.848428   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:15.160028   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:15.251136   13224 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0115 09:28:15.251160   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:15.350411   13224 node_ready.go:49] node "addons-154292" has status "Ready":"True"
	I0115 09:28:15.350490   13224 node_ready.go:38] duration metric: took 27.008826148s waiting for node "addons-154292" to be "Ready" ...
	I0115 09:28:15.350509   13224 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 09:28:15.351215   13224 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0115 09:28:15.351279   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:15.352347   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:15.360925   13224 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pnv7x" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:15.660302   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:15.752314   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:15.848573   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:15.850532   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:16.160575   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:16.252756   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:16.345973   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:16.350531   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:16.661283   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:16.752108   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:16.845973   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:16.849865   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:16.867462   13224 pod_ready.go:92] pod "coredns-5dd5756b68-pnv7x" in "kube-system" namespace has status "Ready":"True"
	I0115 09:28:16.867492   13224 pod_ready.go:81] duration metric: took 1.506537046s waiting for pod "coredns-5dd5756b68-pnv7x" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:16.867523   13224 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-154292" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:16.872401   13224 pod_ready.go:92] pod "etcd-addons-154292" in "kube-system" namespace has status "Ready":"True"
	I0115 09:28:16.872427   13224 pod_ready.go:81] duration metric: took 4.895826ms waiting for pod "etcd-addons-154292" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:16.872444   13224 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-154292" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:16.877136   13224 pod_ready.go:92] pod "kube-apiserver-addons-154292" in "kube-system" namespace has status "Ready":"True"
	I0115 09:28:16.877153   13224 pod_ready.go:81] duration metric: took 4.701212ms waiting for pod "kube-apiserver-addons-154292" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:16.877162   13224 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-154292" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:16.882057   13224 pod_ready.go:92] pod "kube-controller-manager-addons-154292" in "kube-system" namespace has status "Ready":"True"
	I0115 09:28:16.882077   13224 pod_ready.go:81] duration metric: took 4.908747ms waiting for pod "kube-controller-manager-addons-154292" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:16.882090   13224 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p8h22" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:16.945389   13224 pod_ready.go:92] pod "kube-proxy-p8h22" in "kube-system" namespace has status "Ready":"True"
	I0115 09:28:16.945423   13224 pod_ready.go:81] duration metric: took 63.325612ms waiting for pod "kube-proxy-p8h22" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:16.945437   13224 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-154292" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:17.160514   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:17.251004   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:17.345479   13224 pod_ready.go:92] pod "kube-scheduler-addons-154292" in "kube-system" namespace has status "Ready":"True"
	I0115 09:28:17.345500   13224 pod_ready.go:81] duration metric: took 400.055352ms waiting for pod "kube-scheduler-addons-154292" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:17.345509   13224 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-7q98p" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:17.345692   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:17.348653   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:17.660455   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:17.750912   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:17.845912   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:17.849905   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:18.161078   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:18.251564   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:18.345528   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:18.349483   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:18.660583   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:18.751475   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:18.845772   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:18.849826   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:19.243892   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:19.331924   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:19.349243   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:19.427508   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:19.436747   13224 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7q98p" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:19.661432   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:19.752429   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:19.847010   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:19.853365   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:20.161087   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:20.253897   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:20.345680   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:20.349386   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:20.660796   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:20.751732   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:20.846085   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:20.850436   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:21.162014   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:21.253151   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:21.346067   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:21.350069   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:21.660791   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:21.751346   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:21.846394   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:21.849320   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:21.850613   13224 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7q98p" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:22.161686   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:22.251964   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:22.346699   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:22.351212   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:22.662389   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:22.752334   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:22.846895   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:22.849609   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:23.161195   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:23.252248   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:23.346941   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:23.351169   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:23.661479   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:23.750591   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:23.846279   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:23.850676   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:23.852608   13224 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7q98p" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:24.160953   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:24.252090   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:24.346009   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:24.349849   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:24.661165   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:24.752000   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:24.857488   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:24.859562   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:25.161305   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:25.251760   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:25.345496   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:25.349119   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:25.661363   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:25.751878   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:25.846935   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:25.849815   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:25.853496   13224 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7q98p" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:26.161472   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:26.251716   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:26.346417   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:26.355946   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:26.661440   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:26.751670   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:26.846584   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:26.849760   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:27.160878   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:27.251643   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:27.346971   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:27.349804   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:27.661311   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:27.752441   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:27.846402   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:27.850327   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:27.853563   13224 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7q98p" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:28.160728   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:28.251991   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:28.346250   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:28.351128   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:28.729344   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:28.754063   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:28.845585   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:28.849637   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:29.161740   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:29.252803   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:29.347582   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:29.349835   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:29.661393   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:29.751112   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:29.846087   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:29.850267   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:30.161453   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:30.251059   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:30.345871   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:30.350127   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:30.351375   13224 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7q98p" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:30.661430   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:30.750945   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:30.845985   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:30.850107   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:31.160413   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:31.250873   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:31.345956   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:31.349698   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:31.661698   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:31.750899   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:31.847901   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:31.850319   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:32.160631   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:32.251228   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:32.352429   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:32.354319   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:32.355599   13224 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7q98p" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:32.727220   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:32.751880   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:32.846441   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:32.850916   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:33.161762   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:33.251416   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:33.345233   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:33.349597   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:33.660836   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:33.751867   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:33.849069   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:33.850849   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:34.161384   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:34.252390   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:34.345335   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:34.349863   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:34.661009   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:34.751237   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:34.847215   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:34.849738   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:34.850929   13224 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7q98p" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:35.161006   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:35.251624   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:35.345364   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:35.349420   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:35.661276   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:35.754686   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:35.845852   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:35.850231   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:36.161141   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:36.264984   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:36.346036   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:36.350267   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:36.660962   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:36.752172   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:36.845417   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:36.849516   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:37.161449   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:37.251299   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:37.349720   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:37.350086   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:37.351500   13224 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7q98p" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:37.661994   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:37.751703   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:37.845829   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:37.849755   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:38.161662   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:38.251146   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:38.345728   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:38.349751   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:38.350481   13224 pod_ready.go:92] pod "metrics-server-7c66d45ddc-7q98p" in "kube-system" namespace has status "Ready":"True"
	I0115 09:28:38.350498   13224 pod_ready.go:81] duration metric: took 21.004982693s waiting for pod "metrics-server-7c66d45ddc-7q98p" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:38.350508   13224 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jbhds" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:38.660947   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:38.751169   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:38.846478   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:38.850237   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:39.160464   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:39.251378   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:39.345188   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:39.350582   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:39.661191   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:39.751262   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:39.846394   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:39.850102   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:40.161481   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:40.251085   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:40.346814   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:40.349735   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:40.356134   13224 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jbhds" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:40.662168   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:40.752185   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:40.846248   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:40.850274   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:41.161839   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:41.251919   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:41.346224   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:41.350546   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:41.660686   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:41.751657   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:41.846304   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:41.850878   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:42.160951   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:42.251819   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:42.346034   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:42.349597   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:42.660837   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:42.751949   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:42.846617   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:42.849708   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:42.856665   13224 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jbhds" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:43.160291   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:43.250676   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:43.345271   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:43.350144   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:43.661039   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:43.751730   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:43.846770   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:43.850479   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 09:28:44.160668   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:44.251018   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:44.345875   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:44.350742   13224 kapi.go:107] duration metric: took 56.005287846s to wait for kubernetes.io/minikube-addons=registry ...
	I0115 09:28:44.660693   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:44.751547   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:44.846053   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:45.161165   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:45.251797   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:45.345962   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:45.356414   13224 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jbhds" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:45.734013   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:45.752210   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:45.847488   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:46.229466   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:46.252904   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:46.346192   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:46.661435   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:46.750987   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:46.846563   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:47.161886   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:47.253205   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:47.348264   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:47.358043   13224 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jbhds" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:47.661531   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:47.751448   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:47.846820   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:48.161479   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:48.252737   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:48.346428   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:48.661346   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:48.753416   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:48.845747   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:49.160761   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:49.251914   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:49.346815   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:49.661545   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:49.751811   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:49.846689   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:49.856502   13224 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jbhds" in "kube-system" namespace has status "Ready":"False"
	I0115 09:28:50.161491   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:50.252124   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:50.345473   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:50.662096   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:50.752950   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:50.846860   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:51.228706   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:51.253164   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:51.454557   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:51.462352   13224 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-jbhds" in "kube-system" namespace has status "Ready":"True"
	I0115 09:28:51.462379   13224 pod_ready.go:81] duration metric: took 13.111863563s waiting for pod "nvidia-device-plugin-daemonset-jbhds" in "kube-system" namespace to be "Ready" ...
	I0115 09:28:51.462405   13224 pod_ready.go:38] duration metric: took 36.111879919s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 09:28:51.462428   13224 api_server.go:52] waiting for apiserver process to appear ...
	I0115 09:28:51.462466   13224 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 09:28:51.462519   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 09:28:51.555442   13224 cri.go:89] found id: "5c3ea581c5cd3fbdd79b482193ab669748236ea5b4d137228ef0301e2da38113"
	I0115 09:28:51.555466   13224 cri.go:89] found id: ""
	I0115 09:28:51.555476   13224 logs.go:284] 1 containers: [5c3ea581c5cd3fbdd79b482193ab669748236ea5b4d137228ef0301e2da38113]
	I0115 09:28:51.555528   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:28:51.559307   13224 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 09:28:51.559371   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 09:28:51.728487   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:51.746205   13224 cri.go:89] found id: "113958e16b6e3bc5a636e9eb9b70615c8aa1ef7f8d3c7498601798ec4d149c10"
	I0115 09:28:51.746229   13224 cri.go:89] found id: ""
	I0115 09:28:51.746238   13224 logs.go:284] 1 containers: [113958e16b6e3bc5a636e9eb9b70615c8aa1ef7f8d3c7498601798ec4d149c10]
	I0115 09:28:51.746284   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:28:51.750774   13224 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 09:28:51.750836   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 09:28:51.752260   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:51.846629   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:51.943236   13224 cri.go:89] found id: "5f44fb53df42307a78fb19225891c8ca1dec6351182efecc5dcd41f1ecbecaf4"
	I0115 09:28:51.943262   13224 cri.go:89] found id: ""
	I0115 09:28:51.943273   13224 logs.go:284] 1 containers: [5f44fb53df42307a78fb19225891c8ca1dec6351182efecc5dcd41f1ecbecaf4]
	I0115 09:28:51.943327   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:28:51.947893   13224 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 09:28:51.947959   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 09:28:52.151058   13224 cri.go:89] found id: "2eb28ff5e389acaaf84434df6056e19e20368e8f8805a216a9b14b05035c3feb"
	I0115 09:28:52.151125   13224 cri.go:89] found id: ""
	I0115 09:28:52.151139   13224 logs.go:284] 1 containers: [2eb28ff5e389acaaf84434df6056e19e20368e8f8805a216a9b14b05035c3feb]
	I0115 09:28:52.151200   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:28:52.225961   13224 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 09:28:52.226090   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 09:28:52.230354   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:52.251953   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:52.428263   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:52.429999   13224 cri.go:89] found id: "93261a071bf356e3faa42d986366617316fd16ad4507c92887519699d71773aa"
	I0115 09:28:52.430089   13224 cri.go:89] found id: ""
	I0115 09:28:52.430113   13224 logs.go:284] 1 containers: [93261a071bf356e3faa42d986366617316fd16ad4507c92887519699d71773aa]
	I0115 09:28:52.430194   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:28:52.435434   13224 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 09:28:52.435521   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 09:28:52.631528   13224 cri.go:89] found id: "eef929c178340eb0046d2aec899b9fde0571694513a986540a5ce8b9ad91e484"
	I0115 09:28:52.631555   13224 cri.go:89] found id: ""
	I0115 09:28:52.631564   13224 logs.go:284] 1 containers: [eef929c178340eb0046d2aec899b9fde0571694513a986540a5ce8b9ad91e484]
	I0115 09:28:52.631614   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:28:52.638880   13224 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 09:28:52.638945   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 09:28:52.728807   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:52.742896   13224 cri.go:89] found id: "d894efb4c44bd587652af33b63b7b4ab6a1a04c6bc81acffe36fccb6f1223bfa"
	I0115 09:28:52.742923   13224 cri.go:89] found id: ""
	I0115 09:28:52.742934   13224 logs.go:284] 1 containers: [d894efb4c44bd587652af33b63b7b4ab6a1a04c6bc81acffe36fccb6f1223bfa]
	I0115 09:28:52.742987   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:28:52.746362   13224 logs.go:123] Gathering logs for kube-proxy [93261a071bf356e3faa42d986366617316fd16ad4507c92887519699d71773aa] ...
	I0115 09:28:52.746391   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93261a071bf356e3faa42d986366617316fd16ad4507c92887519699d71773aa"
	I0115 09:28:52.751549   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:52.842938   13224 logs.go:123] Gathering logs for kube-controller-manager [eef929c178340eb0046d2aec899b9fde0571694513a986540a5ce8b9ad91e484] ...
	I0115 09:28:52.842973   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef929c178340eb0046d2aec899b9fde0571694513a986540a5ce8b9ad91e484"
	I0115 09:28:52.847364   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:52.973735   13224 logs.go:123] Gathering logs for CRI-O ...
	I0115 09:28:52.973768   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 09:28:53.111379   13224 logs.go:123] Gathering logs for kubelet ...
	I0115 09:28:53.111417   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 09:28:53.161198   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:53.227084   13224 logs.go:123] Gathering logs for dmesg ...
	I0115 09:28:53.227125   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 09:28:53.239980   13224 logs.go:123] Gathering logs for describe nodes ...
	I0115 09:28:53.240012   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 09:28:53.252704   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:53.346857   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:53.455168   13224 logs.go:123] Gathering logs for kube-scheduler [2eb28ff5e389acaaf84434df6056e19e20368e8f8805a216a9b14b05035c3feb] ...
	I0115 09:28:53.455195   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2eb28ff5e389acaaf84434df6056e19e20368e8f8805a216a9b14b05035c3feb"
	I0115 09:28:53.498739   13224 logs.go:123] Gathering logs for kindnet [d894efb4c44bd587652af33b63b7b4ab6a1a04c6bc81acffe36fccb6f1223bfa] ...
	I0115 09:28:53.498785   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d894efb4c44bd587652af33b63b7b4ab6a1a04c6bc81acffe36fccb6f1223bfa"
	I0115 09:28:53.557197   13224 logs.go:123] Gathering logs for container status ...
	I0115 09:28:53.557236   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 09:28:53.639131   13224 logs.go:123] Gathering logs for kube-apiserver [5c3ea581c5cd3fbdd79b482193ab669748236ea5b4d137228ef0301e2da38113] ...
	I0115 09:28:53.639172   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c3ea581c5cd3fbdd79b482193ab669748236ea5b4d137228ef0301e2da38113"
	I0115 09:28:53.661908   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:53.741083   13224 logs.go:123] Gathering logs for etcd [113958e16b6e3bc5a636e9eb9b70615c8aa1ef7f8d3c7498601798ec4d149c10] ...
	I0115 09:28:53.741143   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 113958e16b6e3bc5a636e9eb9b70615c8aa1ef7f8d3c7498601798ec4d149c10"
	I0115 09:28:53.752328   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:53.842477   13224 logs.go:123] Gathering logs for coredns [5f44fb53df42307a78fb19225891c8ca1dec6351182efecc5dcd41f1ecbecaf4] ...
	I0115 09:28:53.842513   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f44fb53df42307a78fb19225891c8ca1dec6351182efecc5dcd41f1ecbecaf4"
	I0115 09:28:53.846088   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:54.160672   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:54.252087   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:54.345720   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:54.661338   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:54.751211   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:54.846866   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:55.161370   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:55.251339   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:55.345478   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:55.661319   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:55.751813   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:55.846471   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:56.162656   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:56.252419   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:56.346622   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:56.383899   13224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 09:28:56.396421   13224 api_server.go:72] duration metric: took 1m14.461805975s to wait for apiserver process to appear ...
	I0115 09:28:56.396444   13224 api_server.go:88] waiting for apiserver healthz status ...
	I0115 09:28:56.396480   13224 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 09:28:56.396553   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 09:28:56.444350   13224 cri.go:89] found id: "5c3ea581c5cd3fbdd79b482193ab669748236ea5b4d137228ef0301e2da38113"
	I0115 09:28:56.444376   13224 cri.go:89] found id: ""
	I0115 09:28:56.444386   13224 logs.go:284] 1 containers: [5c3ea581c5cd3fbdd79b482193ab669748236ea5b4d137228ef0301e2da38113]
	I0115 09:28:56.444444   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:28:56.448269   13224 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 09:28:56.448333   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 09:28:56.543581   13224 cri.go:89] found id: "113958e16b6e3bc5a636e9eb9b70615c8aa1ef7f8d3c7498601798ec4d149c10"
	I0115 09:28:56.543604   13224 cri.go:89] found id: ""
	I0115 09:28:56.543614   13224 logs.go:284] 1 containers: [113958e16b6e3bc5a636e9eb9b70615c8aa1ef7f8d3c7498601798ec4d149c10]
	I0115 09:28:56.543669   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:28:56.547154   13224 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 09:28:56.547216   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 09:28:56.650651   13224 cri.go:89] found id: "5f44fb53df42307a78fb19225891c8ca1dec6351182efecc5dcd41f1ecbecaf4"
	I0115 09:28:56.650677   13224 cri.go:89] found id: ""
	I0115 09:28:56.650686   13224 logs.go:284] 1 containers: [5f44fb53df42307a78fb19225891c8ca1dec6351182efecc5dcd41f1ecbecaf4]
	I0115 09:28:56.650745   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:28:56.654896   13224 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 09:28:56.654962   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 09:28:56.661860   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:56.752708   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:56.834478   13224 cri.go:89] found id: "2eb28ff5e389acaaf84434df6056e19e20368e8f8805a216a9b14b05035c3feb"
	I0115 09:28:56.834507   13224 cri.go:89] found id: ""
	I0115 09:28:56.834519   13224 logs.go:284] 1 containers: [2eb28ff5e389acaaf84434df6056e19e20368e8f8805a216a9b14b05035c3feb]
	I0115 09:28:56.834574   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:28:56.838783   13224 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 09:28:56.838851   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 09:28:56.846298   13224 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 09:28:56.876325   13224 cri.go:89] found id: "93261a071bf356e3faa42d986366617316fd16ad4507c92887519699d71773aa"
	I0115 09:28:56.876347   13224 cri.go:89] found id: ""
	I0115 09:28:56.876354   13224 logs.go:284] 1 containers: [93261a071bf356e3faa42d986366617316fd16ad4507c92887519699d71773aa]
	I0115 09:28:56.876393   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:28:56.879980   13224 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 09:28:56.880044   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 09:28:56.951998   13224 cri.go:89] found id: "eef929c178340eb0046d2aec899b9fde0571694513a986540a5ce8b9ad91e484"
	I0115 09:28:56.952020   13224 cri.go:89] found id: ""
	I0115 09:28:56.952028   13224 logs.go:284] 1 containers: [eef929c178340eb0046d2aec899b9fde0571694513a986540a5ce8b9ad91e484]
	I0115 09:28:56.952087   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:28:56.955484   13224 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 09:28:56.955535   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 09:28:56.990543   13224 cri.go:89] found id: "d894efb4c44bd587652af33b63b7b4ab6a1a04c6bc81acffe36fccb6f1223bfa"
	I0115 09:28:56.990569   13224 cri.go:89] found id: ""
	I0115 09:28:56.990577   13224 logs.go:284] 1 containers: [d894efb4c44bd587652af33b63b7b4ab6a1a04c6bc81acffe36fccb6f1223bfa]
	I0115 09:28:56.990619   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:28:56.994027   13224 logs.go:123] Gathering logs for etcd [113958e16b6e3bc5a636e9eb9b70615c8aa1ef7f8d3c7498601798ec4d149c10] ...
	I0115 09:28:56.994048   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 113958e16b6e3bc5a636e9eb9b70615c8aa1ef7f8d3c7498601798ec4d149c10"
	I0115 09:28:57.036671   13224 logs.go:123] Gathering logs for kube-controller-manager [eef929c178340eb0046d2aec899b9fde0571694513a986540a5ce8b9ad91e484] ...
	I0115 09:28:57.036714   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef929c178340eb0046d2aec899b9fde0571694513a986540a5ce8b9ad91e484"
	I0115 09:28:57.099447   13224 logs.go:123] Gathering logs for kindnet [d894efb4c44bd587652af33b63b7b4ab6a1a04c6bc81acffe36fccb6f1223bfa] ...
	I0115 09:28:57.099498   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d894efb4c44bd587652af33b63b7b4ab6a1a04c6bc81acffe36fccb6f1223bfa"
	I0115 09:28:57.159916   13224 logs.go:123] Gathering logs for CRI-O ...
	I0115 09:28:57.159943   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 09:28:57.236430   13224 logs.go:123] Gathering logs for kubelet ...
	I0115 09:28:57.236465   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 09:28:57.259626   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:57.261805   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:57.326138   13224 logs.go:123] Gathering logs for dmesg ...
	I0115 09:28:57.326172   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 09:28:57.338918   13224 logs.go:123] Gathering logs for describe nodes ...
	I0115 09:28:57.338948   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 09:28:57.346147   13224 kapi.go:107] duration metric: took 1m9.005601737s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0115 09:28:57.484260   13224 logs.go:123] Gathering logs for kube-apiserver [5c3ea581c5cd3fbdd79b482193ab669748236ea5b4d137228ef0301e2da38113] ...
	I0115 09:28:57.484299   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c3ea581c5cd3fbdd79b482193ab669748236ea5b4d137228ef0301e2da38113"
	I0115 09:28:57.530269   13224 logs.go:123] Gathering logs for container status ...
	I0115 09:28:57.530309   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 09:28:57.577232   13224 logs.go:123] Gathering logs for coredns [5f44fb53df42307a78fb19225891c8ca1dec6351182efecc5dcd41f1ecbecaf4] ...
	I0115 09:28:57.577262   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f44fb53df42307a78fb19225891c8ca1dec6351182efecc5dcd41f1ecbecaf4"
	I0115 09:28:57.660433   13224 logs.go:123] Gathering logs for kube-scheduler [2eb28ff5e389acaaf84434df6056e19e20368e8f8805a216a9b14b05035c3feb] ...
	I0115 09:28:57.660461   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2eb28ff5e389acaaf84434df6056e19e20368e8f8805a216a9b14b05035c3feb"
	I0115 09:28:57.660746   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:57.753083   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:57.764179   13224 logs.go:123] Gathering logs for kube-proxy [93261a071bf356e3faa42d986366617316fd16ad4507c92887519699d71773aa] ...
	I0115 09:28:57.764216   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93261a071bf356e3faa42d986366617316fd16ad4507c92887519699d71773aa"
	I0115 09:28:58.161228   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:58.251754   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:58.661343   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:58.751569   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:59.161243   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:59.255604   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:28:59.661545   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 09:28:59.752400   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:00.160953   13224 kapi.go:107] duration metric: took 1m7.503719163s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0115 09:29:00.163405   13224 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-154292 cluster.
	I0115 09:29:00.164916   13224 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0115 09:29:00.166712   13224 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0115 09:29:00.252327   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:00.359632   13224 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0115 09:29:00.364236   13224 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0115 09:29:00.365564   13224 api_server.go:141] control plane version: v1.28.4
	I0115 09:29:00.365589   13224 api_server.go:131] duration metric: took 3.969137374s to wait for apiserver health ...
	I0115 09:29:00.365600   13224 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 09:29:00.365632   13224 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0115 09:29:00.365684   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0115 09:29:00.425666   13224 cri.go:89] found id: "5c3ea581c5cd3fbdd79b482193ab669748236ea5b4d137228ef0301e2da38113"
	I0115 09:29:00.425692   13224 cri.go:89] found id: ""
	I0115 09:29:00.425702   13224 logs.go:284] 1 containers: [5c3ea581c5cd3fbdd79b482193ab669748236ea5b4d137228ef0301e2da38113]
	I0115 09:29:00.425848   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:29:00.429731   13224 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0115 09:29:00.429797   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0115 09:29:00.467757   13224 cri.go:89] found id: "113958e16b6e3bc5a636e9eb9b70615c8aa1ef7f8d3c7498601798ec4d149c10"
	I0115 09:29:00.467782   13224 cri.go:89] found id: ""
	I0115 09:29:00.467792   13224 logs.go:284] 1 containers: [113958e16b6e3bc5a636e9eb9b70615c8aa1ef7f8d3c7498601798ec4d149c10]
	I0115 09:29:00.467840   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:29:00.471463   13224 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0115 09:29:00.471540   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0115 09:29:00.506352   13224 cri.go:89] found id: "5f44fb53df42307a78fb19225891c8ca1dec6351182efecc5dcd41f1ecbecaf4"
	I0115 09:29:00.506376   13224 cri.go:89] found id: ""
	I0115 09:29:00.506387   13224 logs.go:284] 1 containers: [5f44fb53df42307a78fb19225891c8ca1dec6351182efecc5dcd41f1ecbecaf4]
	I0115 09:29:00.506442   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:29:00.509788   13224 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0115 09:29:00.509856   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0115 09:29:00.551411   13224 cri.go:89] found id: "2eb28ff5e389acaaf84434df6056e19e20368e8f8805a216a9b14b05035c3feb"
	I0115 09:29:00.551485   13224 cri.go:89] found id: ""
	I0115 09:29:00.551499   13224 logs.go:284] 1 containers: [2eb28ff5e389acaaf84434df6056e19e20368e8f8805a216a9b14b05035c3feb]
	I0115 09:29:00.551562   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:29:00.554793   13224 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0115 09:29:00.554856   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0115 09:29:00.587833   13224 cri.go:89] found id: "93261a071bf356e3faa42d986366617316fd16ad4507c92887519699d71773aa"
	I0115 09:29:00.587852   13224 cri.go:89] found id: ""
	I0115 09:29:00.587860   13224 logs.go:284] 1 containers: [93261a071bf356e3faa42d986366617316fd16ad4507c92887519699d71773aa]
	I0115 09:29:00.587901   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:29:00.591233   13224 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0115 09:29:00.591297   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0115 09:29:00.626552   13224 cri.go:89] found id: "eef929c178340eb0046d2aec899b9fde0571694513a986540a5ce8b9ad91e484"
	I0115 09:29:00.626641   13224 cri.go:89] found id: ""
	I0115 09:29:00.626661   13224 logs.go:284] 1 containers: [eef929c178340eb0046d2aec899b9fde0571694513a986540a5ce8b9ad91e484]
	I0115 09:29:00.626733   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:29:00.630546   13224 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0115 09:29:00.630610   13224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0115 09:29:00.665634   13224 cri.go:89] found id: "d894efb4c44bd587652af33b63b7b4ab6a1a04c6bc81acffe36fccb6f1223bfa"
	I0115 09:29:00.665654   13224 cri.go:89] found id: ""
	I0115 09:29:00.665661   13224 logs.go:284] 1 containers: [d894efb4c44bd587652af33b63b7b4ab6a1a04c6bc81acffe36fccb6f1223bfa]
	I0115 09:29:00.665711   13224 ssh_runner.go:195] Run: which crictl
	I0115 09:29:00.668755   13224 logs.go:123] Gathering logs for kubelet ...
	I0115 09:29:00.668781   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0115 09:29:00.740510   13224 logs.go:123] Gathering logs for describe nodes ...
	I0115 09:29:00.740544   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0115 09:29:00.751543   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:00.839296   13224 logs.go:123] Gathering logs for kube-controller-manager [eef929c178340eb0046d2aec899b9fde0571694513a986540a5ce8b9ad91e484] ...
	I0115 09:29:00.839324   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef929c178340eb0046d2aec899b9fde0571694513a986540a5ce8b9ad91e484"
	I0115 09:29:00.898457   13224 logs.go:123] Gathering logs for CRI-O ...
	I0115 09:29:00.898493   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0115 09:29:00.981299   13224 logs.go:123] Gathering logs for kube-scheduler [2eb28ff5e389acaaf84434df6056e19e20368e8f8805a216a9b14b05035c3feb] ...
	I0115 09:29:00.981338   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2eb28ff5e389acaaf84434df6056e19e20368e8f8805a216a9b14b05035c3feb"
	I0115 09:29:01.041611   13224 logs.go:123] Gathering logs for kube-proxy [93261a071bf356e3faa42d986366617316fd16ad4507c92887519699d71773aa] ...
	I0115 09:29:01.041647   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93261a071bf356e3faa42d986366617316fd16ad4507c92887519699d71773aa"
	I0115 09:29:01.135662   13224 logs.go:123] Gathering logs for kindnet [d894efb4c44bd587652af33b63b7b4ab6a1a04c6bc81acffe36fccb6f1223bfa] ...
	I0115 09:29:01.135692   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d894efb4c44bd587652af33b63b7b4ab6a1a04c6bc81acffe36fccb6f1223bfa"
	I0115 09:29:01.175683   13224 logs.go:123] Gathering logs for container status ...
	I0115 09:29:01.175712   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0115 09:29:01.252393   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:01.271281   13224 logs.go:123] Gathering logs for dmesg ...
	I0115 09:29:01.271320   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0115 09:29:01.283849   13224 logs.go:123] Gathering logs for kube-apiserver [5c3ea581c5cd3fbdd79b482193ab669748236ea5b4d137228ef0301e2da38113] ...
	I0115 09:29:01.283879   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c3ea581c5cd3fbdd79b482193ab669748236ea5b4d137228ef0301e2da38113"
	I0115 09:29:01.365846   13224 logs.go:123] Gathering logs for etcd [113958e16b6e3bc5a636e9eb9b70615c8aa1ef7f8d3c7498601798ec4d149c10] ...
	I0115 09:29:01.365891   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 113958e16b6e3bc5a636e9eb9b70615c8aa1ef7f8d3c7498601798ec4d149c10"
	I0115 09:29:01.446917   13224 logs.go:123] Gathering logs for coredns [5f44fb53df42307a78fb19225891c8ca1dec6351182efecc5dcd41f1ecbecaf4] ...
	I0115 09:29:01.446972   13224 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f44fb53df42307a78fb19225891c8ca1dec6351182efecc5dcd41f1ecbecaf4"
	I0115 09:29:01.752757   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:02.251812   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:02.751751   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:03.252292   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:03.750886   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:04.061755   13224 system_pods.go:59] 19 kube-system pods found
	I0115 09:29:04.061791   13224 system_pods.go:61] "coredns-5dd5756b68-pnv7x" [58f7047d-09a2-4378-9c9c-b1f71dd0f86a] Running
	I0115 09:29:04.061800   13224 system_pods.go:61] "csi-hostpath-attacher-0" [3375631e-7718-4f89-8d58-f5053672a092] Running
	I0115 09:29:04.061806   13224 system_pods.go:61] "csi-hostpath-resizer-0" [b59f0707-2794-4aa2-9222-9a9f90f2e7e4] Running
	I0115 09:29:04.061819   13224 system_pods.go:61] "csi-hostpathplugin-7vm76" [e25ed526-e822-4437-96c8-cacfa9beef03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0115 09:29:04.061830   13224 system_pods.go:61] "etcd-addons-154292" [ae568b79-c355-4473-82b7-d697b7174054] Running
	I0115 09:29:04.061843   13224 system_pods.go:61] "kindnet-k8djz" [6fe9a590-c833-4566-a303-d1639e6065e2] Running
	I0115 09:29:04.061850   13224 system_pods.go:61] "kube-apiserver-addons-154292" [cfc3cd83-1361-4815-b5de-b78eb9d5a621] Running
	I0115 09:29:04.061865   13224 system_pods.go:61] "kube-controller-manager-addons-154292" [06b6d885-d4eb-4ffb-ad2c-340636c2f710] Running
	I0115 09:29:04.061872   13224 system_pods.go:61] "kube-ingress-dns-minikube" [22877b5a-3a4d-4595-8cf4-46db29f0d7fa] Running
	I0115 09:29:04.061878   13224 system_pods.go:61] "kube-proxy-p8h22" [f0766c1e-5340-470f-90cc-2e7da2a71872] Running
	I0115 09:29:04.061888   13224 system_pods.go:61] "kube-scheduler-addons-154292" [038f065c-4316-46a7-be46-d6dfda488688] Running
	I0115 09:29:04.061898   13224 system_pods.go:61] "metrics-server-7c66d45ddc-7q98p" [d5d5ebfd-2bbf-4607-b6c5-2d877e1f6c24] Running
	I0115 09:29:04.061908   13224 system_pods.go:61] "nvidia-device-plugin-daemonset-jbhds" [3a979418-cfab-4d46-9160-e4a887d9aea9] Running
	I0115 09:29:04.061917   13224 system_pods.go:61] "registry-5v2wn" [2d0e5c92-7366-42c3-8b78-10a21aa56b21] Running
	I0115 09:29:04.061927   13224 system_pods.go:61] "registry-proxy-49v8j" [4ed4e42b-4d38-4db1-a11f-5dc29a2b27de] Running
	I0115 09:29:04.061933   13224 system_pods.go:61] "snapshot-controller-58dbcc7b99-bppwp" [c9c9f772-d559-49c4-aa45-5e8b69d0ed60] Running
	I0115 09:29:04.061943   13224 system_pods.go:61] "snapshot-controller-58dbcc7b99-twdbw" [6323622d-6305-47c8-b2e9-ba2b49ccb29f] Running
	I0115 09:29:04.061949   13224 system_pods.go:61] "storage-provisioner" [68dbe023-38af-46b8-b353-9a057948b998] Running
	I0115 09:29:04.061959   13224 system_pods.go:61] "tiller-deploy-7b677967b9-r46wb" [f28e7e4a-28ae-40b8-8387-fa7698c378cd] Running
	I0115 09:29:04.061967   13224 system_pods.go:74] duration metric: took 3.696360635s to wait for pod list to return data ...
	I0115 09:29:04.061980   13224 default_sa.go:34] waiting for default service account to be created ...
	I0115 09:29:04.064249   13224 default_sa.go:45] found service account: "default"
	I0115 09:29:04.064270   13224 default_sa.go:55] duration metric: took 2.279518ms for default service account to be created ...
	I0115 09:29:04.064280   13224 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 09:29:04.073328   13224 system_pods.go:86] 19 kube-system pods found
	I0115 09:29:04.073405   13224 system_pods.go:89] "coredns-5dd5756b68-pnv7x" [58f7047d-09a2-4378-9c9c-b1f71dd0f86a] Running
	I0115 09:29:04.073417   13224 system_pods.go:89] "csi-hostpath-attacher-0" [3375631e-7718-4f89-8d58-f5053672a092] Running
	I0115 09:29:04.073427   13224 system_pods.go:89] "csi-hostpath-resizer-0" [b59f0707-2794-4aa2-9222-9a9f90f2e7e4] Running
	I0115 09:29:04.073439   13224 system_pods.go:89] "csi-hostpathplugin-7vm76" [e25ed526-e822-4437-96c8-cacfa9beef03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0115 09:29:04.073447   13224 system_pods.go:89] "etcd-addons-154292" [ae568b79-c355-4473-82b7-d697b7174054] Running
	I0115 09:29:04.073464   13224 system_pods.go:89] "kindnet-k8djz" [6fe9a590-c833-4566-a303-d1639e6065e2] Running
	I0115 09:29:04.073471   13224 system_pods.go:89] "kube-apiserver-addons-154292" [cfc3cd83-1361-4815-b5de-b78eb9d5a621] Running
	I0115 09:29:04.073479   13224 system_pods.go:89] "kube-controller-manager-addons-154292" [06b6d885-d4eb-4ffb-ad2c-340636c2f710] Running
	I0115 09:29:04.073486   13224 system_pods.go:89] "kube-ingress-dns-minikube" [22877b5a-3a4d-4595-8cf4-46db29f0d7fa] Running
	I0115 09:29:04.073492   13224 system_pods.go:89] "kube-proxy-p8h22" [f0766c1e-5340-470f-90cc-2e7da2a71872] Running
	I0115 09:29:04.073502   13224 system_pods.go:89] "kube-scheduler-addons-154292" [038f065c-4316-46a7-be46-d6dfda488688] Running
	I0115 09:29:04.073510   13224 system_pods.go:89] "metrics-server-7c66d45ddc-7q98p" [d5d5ebfd-2bbf-4607-b6c5-2d877e1f6c24] Running
	I0115 09:29:04.073521   13224 system_pods.go:89] "nvidia-device-plugin-daemonset-jbhds" [3a979418-cfab-4d46-9160-e4a887d9aea9] Running
	I0115 09:29:04.073528   13224 system_pods.go:89] "registry-5v2wn" [2d0e5c92-7366-42c3-8b78-10a21aa56b21] Running
	I0115 09:29:04.073541   13224 system_pods.go:89] "registry-proxy-49v8j" [4ed4e42b-4d38-4db1-a11f-5dc29a2b27de] Running
	I0115 09:29:04.073548   13224 system_pods.go:89] "snapshot-controller-58dbcc7b99-bppwp" [c9c9f772-d559-49c4-aa45-5e8b69d0ed60] Running
	I0115 09:29:04.073561   13224 system_pods.go:89] "snapshot-controller-58dbcc7b99-twdbw" [6323622d-6305-47c8-b2e9-ba2b49ccb29f] Running
	I0115 09:29:04.073568   13224 system_pods.go:89] "storage-provisioner" [68dbe023-38af-46b8-b353-9a057948b998] Running
	I0115 09:29:04.073575   13224 system_pods.go:89] "tiller-deploy-7b677967b9-r46wb" [f28e7e4a-28ae-40b8-8387-fa7698c378cd] Running
	I0115 09:29:04.073584   13224 system_pods.go:126] duration metric: took 9.296581ms to wait for k8s-apps to be running ...
	I0115 09:29:04.073598   13224 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 09:29:04.073664   13224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:29:04.127660   13224 system_svc.go:56] duration metric: took 54.052171ms WaitForService to wait for kubelet.
	I0115 09:29:04.127691   13224 kubeadm.go:581] duration metric: took 1m22.193080034s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 09:29:04.127717   13224 node_conditions.go:102] verifying NodePressure condition ...
	I0115 09:29:04.133141   13224 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0115 09:29:04.133224   13224 node_conditions.go:123] node cpu capacity is 8
	I0115 09:29:04.133247   13224 node_conditions.go:105] duration metric: took 5.522573ms to run NodePressure ...
	I0115 09:29:04.133263   13224 start.go:228] waiting for startup goroutines ...
	I0115 09:29:04.251203   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:04.750730   13224 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 09:29:05.250892   13224 kapi.go:107] duration metric: took 1m16.004944205s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0115 09:29:05.253607   13224 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, cloud-spanner, ingress-dns, inspektor-gadget, helm-tiller, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0115 09:29:05.257059   13224 addons.go:505] enable addons completed in 1m23.866097101s: enabled=[storage-provisioner nvidia-device-plugin cloud-spanner ingress-dns inspektor-gadget helm-tiller metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0115 09:29:05.257123   13224 start.go:233] waiting for cluster config update ...
	I0115 09:29:05.257149   13224 start.go:242] writing updated cluster config ...
	I0115 09:29:05.257440   13224 ssh_runner.go:195] Run: rm -f paused
	I0115 09:29:05.307263   13224 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 09:29:05.309490   13224 out.go:177] * Done! kubectl is now configured to use "addons-154292" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 15 09:31:45 addons-154292 crio[950]: time="2024-01-15 09:31:45.030155724Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7" id=e0898d22-5957-49e0-b0a8-977d6c7acff7 name=/runtime.v1.ImageService/PullImage
	Jan 15 09:31:45 addons-154292 crio[950]: time="2024-01-15 09:31:45.030939596Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=e0ab0177-4d6e-4b6e-8f51-4a204e9b8faa name=/runtime.v1.ImageService/ImageStatus
	Jan 15 09:31:45 addons-154292 crio[950]: time="2024-01-15 09:31:45.032151096Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=e0ab0177-4d6e-4b6e-8f51-4a204e9b8faa name=/runtime.v1.ImageService/ImageStatus
	Jan 15 09:31:45 addons-154292 crio[950]: time="2024-01-15 09:31:45.033032538Z" level=info msg="Creating container: default/hello-world-app-5d77478584-vs756/hello-world-app" id=38acf2c2-d2a9-43b2-9aa7-138719a45521 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 15 09:31:45 addons-154292 crio[950]: time="2024-01-15 09:31:45.033145228Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 15 09:31:45 addons-154292 crio[950]: time="2024-01-15 09:31:45.084313873Z" level=info msg="Created container fc21a6d925d0375d2b540694a6ac2dd0f9dfbf2b52c8f6047dafc706c19a5d07: default/hello-world-app-5d77478584-vs756/hello-world-app" id=38acf2c2-d2a9-43b2-9aa7-138719a45521 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 15 09:31:45 addons-154292 crio[950]: time="2024-01-15 09:31:45.084889774Z" level=info msg="Starting container: fc21a6d925d0375d2b540694a6ac2dd0f9dfbf2b52c8f6047dafc706c19a5d07" id=270979fc-a041-4598-870a-df98a9a51908 name=/runtime.v1.RuntimeService/StartContainer
	Jan 15 09:31:45 addons-154292 crio[950]: time="2024-01-15 09:31:45.090765593Z" level=info msg="Started container" PID=10800 containerID=fc21a6d925d0375d2b540694a6ac2dd0f9dfbf2b52c8f6047dafc706c19a5d07 description=default/hello-world-app-5d77478584-vs756/hello-world-app id=270979fc-a041-4598-870a-df98a9a51908 name=/runtime.v1.RuntimeService/StartContainer sandboxID=416901aa0d9f8d6fb14186364d1de7091d2013f479789cbdf45982f635b04c04
	Jan 15 09:31:45 addons-154292 crio[950]: time="2024-01-15 09:31:45.603065975Z" level=info msg="Removing container: 9b8b35581a1116a7b4a249c8c303c01d10407e0754caeb9e2e5e77d84c0b6e1c" id=1509a1b8-a0a5-4a42-bec6-8ffb4a927515 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 15 09:31:45 addons-154292 crio[950]: time="2024-01-15 09:31:45.615852199Z" level=info msg="Removed container 9b8b35581a1116a7b4a249c8c303c01d10407e0754caeb9e2e5e77d84c0b6e1c: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=1509a1b8-a0a5-4a42-bec6-8ffb4a927515 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 15 09:31:47 addons-154292 crio[950]: time="2024-01-15 09:31:47.172139268Z" level=info msg="Stopping container: 6fbc8b4ffcf017dcb081d392df3980368d3fa74ea4feb674735361391e9dbe23 (timeout: 2s)" id=2b5e22d6-c125-4daf-9395-398a66f0c496 name=/runtime.v1.RuntimeService/StopContainer
	Jan 15 09:31:49 addons-154292 crio[950]: time="2024-01-15 09:31:49.178650402Z" level=warning msg="Stopping container 6fbc8b4ffcf017dcb081d392df3980368d3fa74ea4feb674735361391e9dbe23 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=2b5e22d6-c125-4daf-9395-398a66f0c496 name=/runtime.v1.RuntimeService/StopContainer
	Jan 15 09:31:49 addons-154292 conmon[5987]: conmon 6fbc8b4ffcf017dcb081 <ninfo>: container 5999 exited with status 137
	Jan 15 09:31:49 addons-154292 crio[950]: time="2024-01-15 09:31:49.312489454Z" level=info msg="Stopped container 6fbc8b4ffcf017dcb081d392df3980368d3fa74ea4feb674735361391e9dbe23: ingress-nginx/ingress-nginx-controller-69cff4fd79-jbk72/controller" id=2b5e22d6-c125-4daf-9395-398a66f0c496 name=/runtime.v1.RuntimeService/StopContainer
	Jan 15 09:31:49 addons-154292 crio[950]: time="2024-01-15 09:31:49.312953677Z" level=info msg="Stopping pod sandbox: 1504489f26cba8b70040879e5e1e9ff1e786b232621c0ae73de70ac0e462ee23" id=e6098b79-0a92-4fb2-8bed-a3f36119f7fb name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 15 09:31:49 addons-154292 crio[950]: time="2024-01-15 09:31:49.316183569Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-DWY6OREHCRFC4CTR - [0:0]\n:KUBE-HP-HZU7PU6NCO6IGN2J - [0:0]\n-X KUBE-HP-DWY6OREHCRFC4CTR\n-X KUBE-HP-HZU7PU6NCO6IGN2J\nCOMMIT\n"
	Jan 15 09:31:49 addons-154292 crio[950]: time="2024-01-15 09:31:49.317554722Z" level=info msg="Closing host port tcp:80"
	Jan 15 09:31:49 addons-154292 crio[950]: time="2024-01-15 09:31:49.317591176Z" level=info msg="Closing host port tcp:443"
	Jan 15 09:31:49 addons-154292 crio[950]: time="2024-01-15 09:31:49.319006796Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 15 09:31:49 addons-154292 crio[950]: time="2024-01-15 09:31:49.319024079Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 15 09:31:49 addons-154292 crio[950]: time="2024-01-15 09:31:49.319159725Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-jbk72 Namespace:ingress-nginx ID:1504489f26cba8b70040879e5e1e9ff1e786b232621c0ae73de70ac0e462ee23 UID:58c36e21-9dad-41d4-810b-565c2d650e0c NetNS:/var/run/netns/ed3c3f2b-5499-43a3-9ce3-477eeaf82fcd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 15 09:31:49 addons-154292 crio[950]: time="2024-01-15 09:31:49.319276153Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-jbk72 from CNI network \"kindnet\" (type=ptp)"
	Jan 15 09:31:49 addons-154292 crio[950]: time="2024-01-15 09:31:49.358646302Z" level=info msg="Stopped pod sandbox: 1504489f26cba8b70040879e5e1e9ff1e786b232621c0ae73de70ac0e462ee23" id=e6098b79-0a92-4fb2-8bed-a3f36119f7fb name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 15 09:31:49 addons-154292 crio[950]: time="2024-01-15 09:31:49.613369709Z" level=info msg="Removing container: 6fbc8b4ffcf017dcb081d392df3980368d3fa74ea4feb674735361391e9dbe23" id=41fc26af-8349-407f-93e3-050a533f1485 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 15 09:31:49 addons-154292 crio[950]: time="2024-01-15 09:31:49.627135885Z" level=info msg="Removed container 6fbc8b4ffcf017dcb081d392df3980368d3fa74ea4feb674735361391e9dbe23: ingress-nginx/ingress-nginx-controller-69cff4fd79-jbk72/controller" id=41fc26af-8349-407f-93e3-050a533f1485 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fc21a6d925d03       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      9 seconds ago       Running             hello-world-app           0                   416901aa0d9f8       hello-world-app-5d77478584-vs756
	fde845528e4c9       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                              2 minutes ago       Running             nginx                     0                   3b37fe57ac846       nginx
	92996eeea3f91       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   d2931db9e9edc       headlamp-7ddfbb94ff-z47n9
	c77d313d53aac       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   a8f11c37e6679       gcp-auth-d4c87556c-zwqpc
	931484154c9c5       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago       Exited              patch                     2                   b2d3405104f85       ingress-nginx-admission-patch-jfd5h
	2aa07dabddff9       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   dd9c38ad8d411       yakd-dashboard-9947fc6bf-nsmdk
	8d9353e6b9add       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   67aba589e401e       ingress-nginx-admission-create-9v2s2
	5f44fb53df423       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   73c0c53705228       coredns-5dd5756b68-pnv7x
	25a899c15ab74       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   68a9f65db04e3       storage-provisioner
	93261a071bf35       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   e6f42fab85217       kube-proxy-p8h22
	d894efb4c44bd       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   2bb36cf6f20ce       kindnet-k8djz
	2eb28ff5e389a       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   c1750cd5c4d9c       kube-scheduler-addons-154292
	113958e16b6e3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   ae2670a50e44e       etcd-addons-154292
	eef929c178340       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   874d8cae12a28       kube-controller-manager-addons-154292
	5c3ea581c5cd3       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   31d4959347823       kube-apiserver-addons-154292
	
	
	==> coredns [5f44fb53df42307a78fb19225891c8ca1dec6351182efecc5dcd41f1ecbecaf4] <==
	[INFO] 10.244.0.17:35022 - 43322 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099628s
	[INFO] 10.244.0.17:60742 - 47200 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.003717931s
	[INFO] 10.244.0.17:60742 - 43875 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.004148019s
	[INFO] 10.244.0.17:44540 - 18468 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004125201s
	[INFO] 10.244.0.17:44540 - 27936 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00419187s
	[INFO] 10.244.0.17:59853 - 4019 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004251877s
	[INFO] 10.244.0.17:59853 - 34480 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004646455s
	[INFO] 10.244.0.17:45878 - 19417 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00008447s
	[INFO] 10.244.0.17:45878 - 37316 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000116821s
	[INFO] 10.244.0.21:38953 - 9260 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000203195s
	[INFO] 10.244.0.21:45892 - 65247 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000277889s
	[INFO] 10.244.0.21:39676 - 32445 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00010402s
	[INFO] 10.244.0.21:60313 - 59080 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161574s
	[INFO] 10.244.0.21:57799 - 34924 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101332s
	[INFO] 10.244.0.21:44408 - 8220 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00016585s
	[INFO] 10.244.0.21:45299 - 48329 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.00635746s
	[INFO] 10.244.0.21:45201 - 60966 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006646414s
	[INFO] 10.244.0.21:52750 - 8868 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004167621s
	[INFO] 10.244.0.21:50184 - 43777 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005016168s
	[INFO] 10.244.0.21:34089 - 30160 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004187649s
	[INFO] 10.244.0.21:34260 - 28945 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004690512s
	[INFO] 10.244.0.21:46129 - 43326 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000715453s
	[INFO] 10.244.0.21:40558 - 60182 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000720267s
	[INFO] 10.244.0.23:52214 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000253807s
	[INFO] 10.244.0.23:35047 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000123609s
	
	
	==> describe nodes <==
	Name:               addons-154292
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-154292
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=addons-154292
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T09_27_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-154292
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 09:27:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-154292
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 09:31:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 09:30:31 +0000   Mon, 15 Jan 2024 09:27:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 09:30:31 +0000   Mon, 15 Jan 2024 09:27:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 09:30:31 +0000   Mon, 15 Jan 2024 09:27:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 09:30:31 +0000   Mon, 15 Jan 2024 09:28:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-154292
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6db2b0c4b6c4514b133e5a1189c4dfe
	  System UUID:                88651f84-3b46-4b61-9584-bb9101040ad6
	  Boot ID:                    cfbd0cf6-9096-4b85-b302-a1df984ff6e8
	  Kernel Version:             5.15.0-1048-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-vs756         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-d4c87556c-zwqpc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  headlamp                    headlamp-7ddfbb94ff-z47n9                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 coredns-5dd5756b68-pnv7x                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m13s
	  kube-system                 etcd-addons-154292                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m26s
	  kube-system                 kindnet-k8djz                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m13s
	  kube-system                 kube-apiserver-addons-154292             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-controller-manager-addons-154292    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-proxy-p8h22                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-addons-154292             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-nsmdk           0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             348Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m32s (x8 over 4m32s)  kubelet          Node addons-154292 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m32s (x8 over 4m32s)  kubelet          Node addons-154292 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m32s (x8 over 4m32s)  kubelet          Node addons-154292 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m26s                  kubelet          Node addons-154292 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s                  kubelet          Node addons-154292 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s                  kubelet          Node addons-154292 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m13s                  node-controller  Node addons-154292 event: Registered Node addons-154292 in Controller
	  Normal  NodeReady                3m39s                  kubelet          Node addons-154292 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000792] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000778] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.716960] systemd[1]: /lib/systemd/system/cri-docker.service:1: Assignment outside of section. Ignoring.
	[  +0.045698] systemd[1]: /lib/systemd/system/cri-docker.socket:1: Assignment outside of section. Ignoring.
	[  +0.001504] systemd[1]: cri-docker.socket: Unit has no Listen setting (ListenStream=, ListenDatagram=, ListenFIFO=, ...). Refusing.
	[  +0.006411] systemd[1]: cri-docker.socket: Cannot add dependency job, ignoring: Unit cri-docker.socket has a bad unit file setting.
	[  +8.172773] kauditd_printk_skb: 36 callbacks suppressed
	[Jan15 09:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: fa d3 cb ae 5b ec 06 67 71 7d b4 20 08 00
	[  +1.019567] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa d3 cb ae 5b ec 06 67 71 7d b4 20 08 00
	[  +2.015903] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa d3 cb ae 5b ec 06 67 71 7d b4 20 08 00
	[  +4.223665] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa d3 cb ae 5b ec 06 67 71 7d b4 20 08 00
	[  +8.191477] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa d3 cb ae 5b ec 06 67 71 7d b4 20 08 00
	[Jan15 09:30] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa d3 cb ae 5b ec 06 67 71 7d b4 20 08 00
	[ +33.533634] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa d3 cb ae 5b ec 06 67 71 7d b4 20 08 00
	
	
	==> etcd [113958e16b6e3bc5a636e9eb9b70615c8aa1ef7f8d3c7498601798ec4d149c10] <==
	{"level":"info","ts":"2024-01-15T09:27:45.050273Z","caller":"traceutil/trace.go:171","msg":"trace[875374340] linearizableReadLoop","detail":"{readStateIndex:432; appliedIndex:431; }","duration":"115.653502ms","start":"2024-01-15T09:27:44.934611Z","end":"2024-01-15T09:27:45.050265Z","steps":["trace[875374340] 'read index received'  (duration: 28.529µs)","trace[875374340] 'applied index is now lower than readState.Index'  (duration: 115.624022ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T09:27:45.050364Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.768878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T09:27:45.050384Z","caller":"traceutil/trace.go:171","msg":"trace[1138197749] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:422; }","duration":"115.804775ms","start":"2024-01-15T09:27:44.934573Z","end":"2024-01-15T09:27:45.050378Z","steps":["trace[1138197749] 'agreement among raft nodes before linearized reading'  (duration: 115.719344ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T09:27:46.845965Z","caller":"traceutil/trace.go:171","msg":"trace[1891454753] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"102.148006ms","start":"2024-01-15T09:27:46.7438Z","end":"2024-01-15T09:27:46.845948Z","steps":["trace[1891454753] 'process raft request'  (duration: 94.847278ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T09:27:46.846127Z","caller":"traceutil/trace.go:171","msg":"trace[114171610] transaction","detail":"{read_only:false; response_revision:497; number_of_response:1; }","duration":"102.276533ms","start":"2024-01-15T09:27:46.743834Z","end":"2024-01-15T09:27:46.84611Z","steps":["trace[114171610] 'process raft request'  (duration: 102.033003ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T09:27:46.846731Z","caller":"traceutil/trace.go:171","msg":"trace[300359449] transaction","detail":"{read_only:false; response_revision:498; number_of_response:1; }","duration":"100.375188ms","start":"2024-01-15T09:27:46.746342Z","end":"2024-01-15T09:27:46.846718Z","steps":["trace[300359449] 'process raft request'  (duration: 99.562587ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T09:27:46.925725Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.234749ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/yakd-dashboard/\" range_end:\"/registry/resourcequotas/yakd-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T09:27:46.925867Z","caller":"traceutil/trace.go:171","msg":"trace[1881346585] range","detail":"{range_begin:/registry/resourcequotas/yakd-dashboard/; range_end:/registry/resourcequotas/yakd-dashboard0; response_count:0; response_revision:503; }","duration":"100.388468ms","start":"2024-01-15T09:27:46.825463Z","end":"2024-01-15T09:27:46.925852Z","steps":["trace[1881346585] 'agreement among raft nodes before linearized reading'  (duration: 100.171635ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T09:27:46.926014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.516599ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T09:27:46.92608Z","caller":"traceutil/trace.go:171","msg":"trace[1932142030] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:503; }","duration":"100.587814ms","start":"2024-01-15T09:27:46.825477Z","end":"2024-01-15T09:27:46.926065Z","steps":["trace[1932142030] 'agreement among raft nodes before linearized reading'  (duration: 100.491292ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T09:28:51.449429Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.595752ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T09:28:51.449574Z","caller":"traceutil/trace.go:171","msg":"trace[2097424623] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1173; }","duration":"117.754289ms","start":"2024-01-15T09:28:51.331805Z","end":"2024-01-15T09:28:51.449559Z","steps":["trace[2097424623] 'range keys from in-memory index tree'  (duration: 117.508907ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T09:28:51.449831Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.134216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14593"}
	{"level":"info","ts":"2024-01-15T09:28:51.449907Z","caller":"traceutil/trace.go:171","msg":"trace[332962205] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1173; }","duration":"106.214893ms","start":"2024-01-15T09:28:51.343682Z","end":"2024-01-15T09:28:51.449897Z","steps":["trace[332962205] 'range keys from in-memory index tree'  (duration: 105.630868ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T09:28:51.453297Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.926474ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-jbhds\" ","response":"range_response_count:1 size:3967"}
	{"level":"info","ts":"2024-01-15T09:28:51.453359Z","caller":"traceutil/trace.go:171","msg":"trace[1467486426] range","detail":"{range_begin:/registry/pods/kube-system/nvidia-device-plugin-daemonset-jbhds; range_end:; response_count:1; response_revision:1173; }","duration":"100.997476ms","start":"2024-01-15T09:28:51.352349Z","end":"2024-01-15T09:28:51.453346Z","steps":["trace[1467486426] 'range keys from in-memory index tree'  (duration: 100.799797ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T09:28:57.257425Z","caller":"traceutil/trace.go:171","msg":"trace[384116106] transaction","detail":"{read_only:false; response_revision:1185; number_of_response:1; }","duration":"210.355824ms","start":"2024-01-15T09:28:57.047041Z","end":"2024-01-15T09:28:57.257396Z","steps":["trace[384116106] 'process raft request'  (duration: 190.73479ms)","trace[384116106] 'compare'  (duration: 19.448494ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-15T09:29:02.560883Z","caller":"traceutil/trace.go:171","msg":"trace[1778104857] transaction","detail":"{read_only:false; response_revision:1241; number_of_response:1; }","duration":"122.888685ms","start":"2024-01-15T09:29:02.437971Z","end":"2024-01-15T09:29:02.56086Z","steps":["trace[1778104857] 'process raft request'  (duration: 62.017314ms)","trace[1778104857] 'compare'  (duration: 60.764315ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-15T09:29:19.590228Z","caller":"traceutil/trace.go:171","msg":"trace[1314075876] transaction","detail":"{read_only:false; response_revision:1400; number_of_response:1; }","duration":"114.658522ms","start":"2024-01-15T09:29:19.475543Z","end":"2024-01-15T09:29:19.590202Z","steps":["trace[1314075876] 'process raft request'  (duration: 114.475825ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T09:29:19.766561Z","caller":"traceutil/trace.go:171","msg":"trace[1442819361] transaction","detail":"{read_only:false; response_revision:1402; number_of_response:1; }","duration":"167.626864ms","start":"2024-01-15T09:29:19.598914Z","end":"2024-01-15T09:29:19.766541Z","steps":["trace[1442819361] 'process raft request'  (duration: 167.511082ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T09:29:25.325534Z","caller":"traceutil/trace.go:171","msg":"trace[606412222] transaction","detail":"{read_only:false; response_revision:1508; number_of_response:1; }","duration":"117.866203ms","start":"2024-01-15T09:29:25.207644Z","end":"2024-01-15T09:29:25.32551Z","steps":["trace[606412222] 'process raft request'  (duration: 117.721057ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T09:29:25.472981Z","caller":"traceutil/trace.go:171","msg":"trace[377823768] linearizableReadLoop","detail":"{readStateIndex:1556; appliedIndex:1555; }","duration":"136.694927ms","start":"2024-01-15T09:29:25.336266Z","end":"2024-01-15T09:29:25.472961Z","steps":["trace[377823768] 'read index received'  (duration: 89.101506ms)","trace[377823768] 'applied index is now lower than readState.Index'  (duration: 47.592405ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-15T09:29:25.473043Z","caller":"traceutil/trace.go:171","msg":"trace[826960250] transaction","detail":"{read_only:false; response_revision:1509; number_of_response:1; }","duration":"138.898135ms","start":"2024-01-15T09:29:25.33413Z","end":"2024-01-15T09:29:25.473028Z","steps":["trace[826960250] 'process raft request'  (duration: 91.293101ms)","trace[826960250] 'compare'  (duration: 47.432749ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T09:29:25.473119Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.820762ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-15T09:29:25.473148Z","caller":"traceutil/trace.go:171","msg":"trace[652755752] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1509; }","duration":"136.895699ms","start":"2024-01-15T09:29:25.336244Z","end":"2024-01-15T09:29:25.473139Z","steps":["trace[652755752] 'agreement among raft nodes before linearized reading'  (duration: 136.798307ms)"],"step_count":1}
	
	
	==> gcp-auth [c77d313d53aacdf495a52f68916668e8fd073cdfc797d3ebba67507096f2637d] <==
	2024/01/15 09:29:06 Ready to write response ...
	2024/01/15 09:29:06 Ready to marshal response ...
	2024/01/15 09:29:06 Ready to write response ...
	2024/01/15 09:29:06 Ready to marshal response ...
	2024/01/15 09:29:06 Ready to write response ...
	2024/01/15 09:29:15 Ready to marshal response ...
	2024/01/15 09:29:15 Ready to write response ...
	2024/01/15 09:29:16 Ready to marshal response ...
	2024/01/15 09:29:16 Ready to write response ...
	2024/01/15 09:29:20 Ready to marshal response ...
	2024/01/15 09:29:20 Ready to write response ...
	2024/01/15 09:29:23 Ready to marshal response ...
	2024/01/15 09:29:23 Ready to write response ...
	2024/01/15 09:29:24 Ready to marshal response ...
	2024/01/15 09:29:24 Ready to write response ...
	2024/01/15 09:29:24 Ready to marshal response ...
	2024/01/15 09:29:24 Ready to write response ...
	2024/01/15 09:29:33 Ready to marshal response ...
	2024/01/15 09:29:33 Ready to write response ...
	2024/01/15 09:29:56 Ready to marshal response ...
	2024/01/15 09:29:56 Ready to write response ...
	2024/01/15 09:30:22 Ready to marshal response ...
	2024/01/15 09:30:22 Ready to write response ...
	2024/01/15 09:31:43 Ready to marshal response ...
	2024/01/15 09:31:43 Ready to write response ...
	
	
	==> kernel <==
	 09:31:54 up 14 min,  0 users,  load average: 1.17, 0.69, 0.33
	Linux addons-154292 5.15.0-1048-gcp #56~20.04.1-Ubuntu SMP Fri Nov 24 16:52:37 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [d894efb4c44bd587652af33b63b7b4ab6a1a04c6bc81acffe36fccb6f1223bfa] <==
	I0115 09:29:44.840596       1 main.go:227] handling current node
	I0115 09:29:54.852356       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:29:54.852376       1 main.go:227] handling current node
	I0115 09:30:04.856534       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:30:04.856558       1 main.go:227] handling current node
	I0115 09:30:14.868563       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:30:14.868589       1 main.go:227] handling current node
	I0115 09:30:24.872495       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:30:24.872516       1 main.go:227] handling current node
	I0115 09:30:34.884214       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:30:34.884238       1 main.go:227] handling current node
	I0115 09:30:44.888246       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:30:44.888271       1 main.go:227] handling current node
	I0115 09:30:54.900725       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:30:54.900758       1 main.go:227] handling current node
	I0115 09:31:04.904187       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:31:04.904210       1 main.go:227] handling current node
	I0115 09:31:14.916520       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:31:14.916560       1 main.go:227] handling current node
	I0115 09:31:24.920842       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:31:24.920865       1 main.go:227] handling current node
	I0115 09:31:34.932397       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:31:34.932419       1 main.go:227] handling current node
	I0115 09:31:44.936690       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:31:44.936716       1 main.go:227] handling current node
	
	
	==> kube-apiserver [5c3ea581c5cd3fbdd79b482193ab669748236ea5b4d137228ef0301e2da38113] <==
	E0115 09:29:34.594923       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0115 09:29:38.960819       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0115 09:29:49.595780       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0115 09:30:07.337584       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0115 09:30:37.774089       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 09:30:37.774135       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 09:30:37.780144       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 09:30:37.780205       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 09:30:37.787288       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 09:30:37.787427       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 09:30:37.787868       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 09:30:37.787933       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 09:30:37.797278       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 09:30:37.797339       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 09:30:37.802359       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 09:30:37.802404       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 09:30:37.807616       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 09:30:37.807661       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 09:30:37.825529       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 09:30:37.825570       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0115 09:30:38.788190       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0115 09:30:38.808793       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0115 09:30:38.835777       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0115 09:31:43.887854       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.203.94"}
	E0115 09:31:46.237466       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [eef929c178340eb0046d2aec899b9fde0571694513a986540a5ce8b9ad91e484] <==
	W0115 09:30:57.770543       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 09:30:57.770577       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 09:31:14.183174       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 09:31:14.183218       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 09:31:17.034956       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 09:31:17.034985       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 09:31:22.177011       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 09:31:22.177054       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 09:31:28.317008       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 09:31:28.317042       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0115 09:31:43.730555       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0115 09:31:43.741209       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-vs756"
	I0115 09:31:43.746392       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="16.079793ms"
	I0115 09:31:43.750994       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="4.547034ms"
	I0115 09:31:43.751087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="47.67µs"
	I0115 09:31:43.758216       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="75.464µs"
	I0115 09:31:45.617452       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="6.792622ms"
	I0115 09:31:45.617540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="47.185µs"
	I0115 09:31:46.159652       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0115 09:31:46.161213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="16.678µs"
	I0115 09:31:46.163731       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0115 09:31:53.871443       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 09:31:53.871475       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 09:31:54.585798       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 09:31:54.585829       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [93261a071bf356e3faa42d986366617316fd16ad4507c92887519699d71773aa] <==
	I0115 09:27:45.240145       1 server_others.go:69] "Using iptables proxy"
	I0115 09:27:45.739970       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0115 09:27:46.344670       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0115 09:27:46.529813       1 server_others.go:152] "Using iptables Proxier"
	I0115 09:27:46.529917       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0115 09:27:46.530024       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0115 09:27:46.530080       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 09:27:46.530377       1 server.go:846] "Version info" version="v1.28.4"
	I0115 09:27:46.530404       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 09:27:46.531759       1 config.go:188] "Starting service config controller"
	I0115 09:27:46.627687       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 09:27:46.534631       1 config.go:97] "Starting endpoint slice config controller"
	I0115 09:27:46.541967       1 config.go:315] "Starting node config controller"
	I0115 09:27:46.643844       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 09:27:46.731577       1 shared_informer.go:318] Caches are synced for node config
	I0115 09:27:46.644862       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 09:27:46.743958       1 shared_informer.go:318] Caches are synced for service config
	I0115 09:27:46.837982       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2eb28ff5e389acaaf84434df6056e19e20368e8f8805a216a9b14b05035c3feb] <==
	W0115 09:27:25.831398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0115 09:27:25.831398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 09:27:25.831422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 09:27:25.831422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0115 09:27:25.831491       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 09:27:25.831510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0115 09:27:25.831515       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0115 09:27:25.831532       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0115 09:27:25.831547       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0115 09:27:25.831558       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 09:27:25.831562       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 09:27:25.831571       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0115 09:27:25.831666       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0115 09:27:25.831711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0115 09:27:25.831989       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 09:27:25.832021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0115 09:27:26.733131       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 09:27:26.733158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0115 09:27:26.737431       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 09:27:26.737486       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0115 09:27:26.795814       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 09:27:26.795853       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0115 09:27:26.803124       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 09:27:26.803165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0115 09:27:27.028395       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 15 09:31:43 addons-154292 kubelet[1553]: I0115 09:31:43.895536    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvh79\" (UniqueName: \"kubernetes.io/projected/ac05ddd7-948e-42c4-8d86-f4522f7e6c1d-kube-api-access-bvh79\") pod \"hello-world-app-5d77478584-vs756\" (UID: \"ac05ddd7-948e-42c4-8d86-f4522f7e6c1d\") " pod="default/hello-world-app-5d77478584-vs756"
	Jan 15 09:31:43 addons-154292 kubelet[1553]: I0115 09:31:43.895648    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ac05ddd7-948e-42c4-8d86-f4522f7e6c1d-gcp-creds\") pod \"hello-world-app-5d77478584-vs756\" (UID: \"ac05ddd7-948e-42c4-8d86-f4522f7e6c1d\") " pod="default/hello-world-app-5d77478584-vs756"
	Jan 15 09:31:44 addons-154292 kubelet[1553]: W0115 09:31:44.081050    1553 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/3da5e64e852e1ecd7c0138bbb459069368c9aa7af3b85743a24b0af83ff477e3/crio-416901aa0d9f8d6fb14186364d1de7091d2013f479789cbdf45982f635b04c04 WatchSource:0}: Error finding container 416901aa0d9f8d6fb14186364d1de7091d2013f479789cbdf45982f635b04c04: Status 404 returned error can't find the container with id 416901aa0d9f8d6fb14186364d1de7091d2013f479789cbdf45982f635b04c04
	Jan 15 09:31:45 addons-154292 kubelet[1553]: I0115 09:31:45.032155    1553 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rz5f\" (UniqueName: \"kubernetes.io/projected/22877b5a-3a4d-4595-8cf4-46db29f0d7fa-kube-api-access-6rz5f\") pod \"22877b5a-3a4d-4595-8cf4-46db29f0d7fa\" (UID: \"22877b5a-3a4d-4595-8cf4-46db29f0d7fa\") "
	Jan 15 09:31:45 addons-154292 kubelet[1553]: I0115 09:31:45.034089    1553 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22877b5a-3a4d-4595-8cf4-46db29f0d7fa-kube-api-access-6rz5f" (OuterVolumeSpecName: "kube-api-access-6rz5f") pod "22877b5a-3a4d-4595-8cf4-46db29f0d7fa" (UID: "22877b5a-3a4d-4595-8cf4-46db29f0d7fa"). InnerVolumeSpecName "kube-api-access-6rz5f". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 15 09:31:45 addons-154292 kubelet[1553]: I0115 09:31:45.133179    1553 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6rz5f\" (UniqueName: \"kubernetes.io/projected/22877b5a-3a4d-4595-8cf4-46db29f0d7fa-kube-api-access-6rz5f\") on node \"addons-154292\" DevicePath \"\""
	Jan 15 09:31:45 addons-154292 kubelet[1553]: I0115 09:31:45.602129    1553 scope.go:117] "RemoveContainer" containerID="9b8b35581a1116a7b4a249c8c303c01d10407e0754caeb9e2e5e77d84c0b6e1c"
	Jan 15 09:31:45 addons-154292 kubelet[1553]: I0115 09:31:45.611200    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-vs756" podStartSLOduration=1.664477169 podCreationTimestamp="2024-01-15 09:31:43 +0000 UTC" firstStartedPulling="2024-01-15 09:31:44.083760638 +0000 UTC m=+255.760566527" lastFinishedPulling="2024-01-15 09:31:45.030445134 +0000 UTC m=+256.707251014" observedRunningTime="2024-01-15 09:31:45.61050238 +0000 UTC m=+257.287308279" watchObservedRunningTime="2024-01-15 09:31:45.611161656 +0000 UTC m=+257.287967553"
	Jan 15 09:31:45 addons-154292 kubelet[1553]: I0115 09:31:45.616139    1553 scope.go:117] "RemoveContainer" containerID="9b8b35581a1116a7b4a249c8c303c01d10407e0754caeb9e2e5e77d84c0b6e1c"
	Jan 15 09:31:45 addons-154292 kubelet[1553]: E0115 09:31:45.616598    1553 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b8b35581a1116a7b4a249c8c303c01d10407e0754caeb9e2e5e77d84c0b6e1c\": container with ID starting with 9b8b35581a1116a7b4a249c8c303c01d10407e0754caeb9e2e5e77d84c0b6e1c not found: ID does not exist" containerID="9b8b35581a1116a7b4a249c8c303c01d10407e0754caeb9e2e5e77d84c0b6e1c"
	Jan 15 09:31:45 addons-154292 kubelet[1553]: I0115 09:31:45.616654    1553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b8b35581a1116a7b4a249c8c303c01d10407e0754caeb9e2e5e77d84c0b6e1c"} err="failed to get container status \"9b8b35581a1116a7b4a249c8c303c01d10407e0754caeb9e2e5e77d84c0b6e1c\": rpc error: code = NotFound desc = could not find container \"9b8b35581a1116a7b4a249c8c303c01d10407e0754caeb9e2e5e77d84c0b6e1c\": container with ID starting with 9b8b35581a1116a7b4a249c8c303c01d10407e0754caeb9e2e5e77d84c0b6e1c not found: ID does not exist"
	Jan 15 09:31:46 addons-154292 kubelet[1553]: I0115 09:31:46.442109    1553 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="22877b5a-3a4d-4595-8cf4-46db29f0d7fa" path="/var/lib/kubelet/pods/22877b5a-3a4d-4595-8cf4-46db29f0d7fa/volumes"
	Jan 15 09:31:46 addons-154292 kubelet[1553]: I0115 09:31:46.442519    1553 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="630fb386-3234-453d-a67a-8fc2f0b6b670" path="/var/lib/kubelet/pods/630fb386-3234-453d-a67a-8fc2f0b6b670/volumes"
	Jan 15 09:31:46 addons-154292 kubelet[1553]: I0115 09:31:46.442980    1553 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cdb66f8a-5d85-486b-b1c1-fa7ab8043412" path="/var/lib/kubelet/pods/cdb66f8a-5d85-486b-b1c1-fa7ab8043412/volumes"
	Jan 15 09:31:49 addons-154292 kubelet[1553]: I0115 09:31:49.463948    1553 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58c36e21-9dad-41d4-810b-565c2d650e0c-webhook-cert\") pod \"58c36e21-9dad-41d4-810b-565c2d650e0c\" (UID: \"58c36e21-9dad-41d4-810b-565c2d650e0c\") "
	Jan 15 09:31:49 addons-154292 kubelet[1553]: I0115 09:31:49.463998    1553 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lmw7\" (UniqueName: \"kubernetes.io/projected/58c36e21-9dad-41d4-810b-565c2d650e0c-kube-api-access-2lmw7\") pod \"58c36e21-9dad-41d4-810b-565c2d650e0c\" (UID: \"58c36e21-9dad-41d4-810b-565c2d650e0c\") "
	Jan 15 09:31:49 addons-154292 kubelet[1553]: I0115 09:31:49.465844    1553 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c36e21-9dad-41d4-810b-565c2d650e0c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "58c36e21-9dad-41d4-810b-565c2d650e0c" (UID: "58c36e21-9dad-41d4-810b-565c2d650e0c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 09:31:49 addons-154292 kubelet[1553]: I0115 09:31:49.465944    1553 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c36e21-9dad-41d4-810b-565c2d650e0c-kube-api-access-2lmw7" (OuterVolumeSpecName: "kube-api-access-2lmw7") pod "58c36e21-9dad-41d4-810b-565c2d650e0c" (UID: "58c36e21-9dad-41d4-810b-565c2d650e0c"). InnerVolumeSpecName "kube-api-access-2lmw7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 15 09:31:49 addons-154292 kubelet[1553]: I0115 09:31:49.564354    1553 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58c36e21-9dad-41d4-810b-565c2d650e0c-webhook-cert\") on node \"addons-154292\" DevicePath \"\""
	Jan 15 09:31:49 addons-154292 kubelet[1553]: I0115 09:31:49.564395    1553 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2lmw7\" (UniqueName: \"kubernetes.io/projected/58c36e21-9dad-41d4-810b-565c2d650e0c-kube-api-access-2lmw7\") on node \"addons-154292\" DevicePath \"\""
	Jan 15 09:31:49 addons-154292 kubelet[1553]: I0115 09:31:49.612229    1553 scope.go:117] "RemoveContainer" containerID="6fbc8b4ffcf017dcb081d392df3980368d3fa74ea4feb674735361391e9dbe23"
	Jan 15 09:31:49 addons-154292 kubelet[1553]: I0115 09:31:49.627418    1553 scope.go:117] "RemoveContainer" containerID="6fbc8b4ffcf017dcb081d392df3980368d3fa74ea4feb674735361391e9dbe23"
	Jan 15 09:31:49 addons-154292 kubelet[1553]: E0115 09:31:49.627762    1553 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fbc8b4ffcf017dcb081d392df3980368d3fa74ea4feb674735361391e9dbe23\": container with ID starting with 6fbc8b4ffcf017dcb081d392df3980368d3fa74ea4feb674735361391e9dbe23 not found: ID does not exist" containerID="6fbc8b4ffcf017dcb081d392df3980368d3fa74ea4feb674735361391e9dbe23"
	Jan 15 09:31:49 addons-154292 kubelet[1553]: I0115 09:31:49.627814    1553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbc8b4ffcf017dcb081d392df3980368d3fa74ea4feb674735361391e9dbe23"} err="failed to get container status \"6fbc8b4ffcf017dcb081d392df3980368d3fa74ea4feb674735361391e9dbe23\": rpc error: code = NotFound desc = could not find container \"6fbc8b4ffcf017dcb081d392df3980368d3fa74ea4feb674735361391e9dbe23\": container with ID starting with 6fbc8b4ffcf017dcb081d392df3980368d3fa74ea4feb674735361391e9dbe23 not found: ID does not exist"
	Jan 15 09:31:50 addons-154292 kubelet[1553]: I0115 09:31:50.441781    1553 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="58c36e21-9dad-41d4-810b-565c2d650e0c" path="/var/lib/kubelet/pods/58c36e21-9dad-41d4-810b-565c2d650e0c/volumes"
	
	
	==> storage-provisioner [25a899c15ab740d4c2b989b9f95db44f106093b902e6c4e51d8119a63414e865] <==
	I0115 09:28:15.969417       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 09:28:15.978306       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 09:28:15.978350       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 09:28:16.026119       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 09:28:16.026241       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-154292_02507b2e-04f7-4d85-a712-85e8b2d74b63!
	I0115 09:28:16.026285       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b08b786a-d788-44f2-bed5-984761ab1574", APIVersion:"v1", ResourceVersion:"945", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-154292_02507b2e-04f7-4d85-a712-85e8b2d74b63 became leader
	I0115 09:28:16.126823       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-154292_02507b2e-04f7-4d85-a712-85e8b2d74b63!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-154292 -n addons-154292
helpers_test.go:261: (dbg) Run:  kubectl --context addons-154292 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.10s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (176.37s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-865640 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-865640 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.709152087s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-865640 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-865640 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [20382dc2-5766-4fa6-ae14-71f8aa800360] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [20382dc2-5766-4fa6-ae14-71f8aa800360] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.003869273s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-865640 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0115 09:39:05.326356   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
E0115 09:39:33.010231   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-865640 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.683239295s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-865640 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-865640 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.008216191s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-865640 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-865640 addons disable ingress-dns --alsologtostderr -v=1: (1.79401926s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-865640 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-865640 addons disable ingress --alsologtostderr -v=1: (7.413187687s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-865640
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-865640:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "43c5ddc1c8d004d1e3ba6541c7c47c19ea2e6e129ea9543f39038e35887d5dcc",
	        "Created": "2024-01-15T09:36:27.176576678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51490,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-15T09:36:27.43454951Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/43c5ddc1c8d004d1e3ba6541c7c47c19ea2e6e129ea9543f39038e35887d5dcc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/43c5ddc1c8d004d1e3ba6541c7c47c19ea2e6e129ea9543f39038e35887d5dcc/hostname",
	        "HostsPath": "/var/lib/docker/containers/43c5ddc1c8d004d1e3ba6541c7c47c19ea2e6e129ea9543f39038e35887d5dcc/hosts",
	        "LogPath": "/var/lib/docker/containers/43c5ddc1c8d004d1e3ba6541c7c47c19ea2e6e129ea9543f39038e35887d5dcc/43c5ddc1c8d004d1e3ba6541c7c47c19ea2e6e129ea9543f39038e35887d5dcc-json.log",
	        "Name": "/ingress-addon-legacy-865640",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-865640:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-865640",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f923cd52727f71107ff05804542386ac219fa00d28b9dc0b45c50c23e5830572-init/diff:/var/lib/docker/overlay2/d9ef098e29db67903afbff93fb25a8f837156cdbfdd0e74ced52d24f8de7a26c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f923cd52727f71107ff05804542386ac219fa00d28b9dc0b45c50c23e5830572/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f923cd52727f71107ff05804542386ac219fa00d28b9dc0b45c50c23e5830572/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f923cd52727f71107ff05804542386ac219fa00d28b9dc0b45c50c23e5830572/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-865640",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-865640/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-865640",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-865640",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-865640",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6da0aa28b50edb2e4a76f58b01059f27387b400a5cae2d5cd6dea446749f3f2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e6da0aa28b50",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-865640": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "43c5ddc1c8d0",
	                        "ingress-addon-legacy-865640"
	                    ],
	                    "NetworkID": "95b429f2c6e91b715aba2e746833afd5b31b21a869718532a95b967c9477f096",
	                    "EndpointID": "0a9532ab6709f8e698fb6d7358c4661bd992b46e097ea25f7f09d82efa7d810e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
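Most of the inspect dump above can also be queried field by field instead of being read in full. A few illustrative docker inspect --format queries against the same container (the container and network names are taken from the output above, and the Go-template style is the same one the minikube docker driver uses later in this log):
	# container state and its static IP on the cluster network
	docker inspect -f '{{.State.Status}}' ingress-addon-legacy-865640
	docker inspect -f '{{(index .NetworkSettings.Networks "ingress-addon-legacy-865640").IPAddress}}' ingress-addon-legacy-865640
	# host port published for the node's SSH endpoint (22/tcp)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-865640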
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-865640 -n ingress-addon-legacy-865640
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-865640 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-865640 logs -n 25: (1.118704282s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|---------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-945307 ssh sudo cat        | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC | 15 Jan 24 09:35 UTC |
	|                | /usr/share/ca-certificates/11825.pem  |                             |         |         |                     |                     |
	| ssh            | functional-945307 ssh sudo cat        | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC | 15 Jan 24 09:35 UTC |
	|                | /etc/test/nested/copy/11825/hosts     |                             |         |         |                     |                     |
	| ssh            | functional-945307 ssh sudo cat        | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC | 15 Jan 24 09:35 UTC |
	|                | /etc/ssl/certs/51391683.0             |                             |         |         |                     |                     |
	| ssh            | functional-945307 ssh sudo cat        | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC | 15 Jan 24 09:35 UTC |
	|                | /etc/ssl/certs/118252.pem             |                             |         |         |                     |                     |
	| ssh            | functional-945307 ssh sudo cat        | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC | 15 Jan 24 09:35 UTC |
	|                | /usr/share/ca-certificates/118252.pem |                             |         |         |                     |                     |
	| ssh            | functional-945307 ssh sudo cat        | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC | 15 Jan 24 09:35 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0             |                             |         |         |                     |                     |
	| image          | functional-945307                     | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC | 15 Jan 24 09:35 UTC |
	|                | image ls --format short               |                             |         |         |                     |                     |
	|                | --alsologtostderr                     |                             |         |         |                     |                     |
	| image          | functional-945307                     | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC | 15 Jan 24 09:35 UTC |
	|                | image ls --format yaml                |                             |         |         |                     |                     |
	|                | --alsologtostderr                     |                             |         |         |                     |                     |
	| ssh            | functional-945307 ssh pgrep           | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC |                     |
	|                | buildkitd                             |                             |         |         |                     |                     |
	| image          | functional-945307 image build -t      | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC | 15 Jan 24 09:35 UTC |
	|                | localhost/my-image:functional-945307  |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr      |                             |         |         |                     |                     |
	| service        | functional-945307 service             | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC | 15 Jan 24 09:35 UTC |
	|                | hello-node-connect --url              |                             |         |         |                     |                     |
	| image          | functional-945307                     | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC | 15 Jan 24 09:35 UTC |
	|                | image ls --format json                |                             |         |         |                     |                     |
	|                | --alsologtostderr                     |                             |         |         |                     |                     |
	| image          | functional-945307                     | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC | 15 Jan 24 09:35 UTC |
	|                | image ls --format table               |                             |         |         |                     |                     |
	|                | --alsologtostderr                     |                             |         |         |                     |                     |
	| update-context | functional-945307                     | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC | 15 Jan 24 09:35 UTC |
	|                | update-context                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                |                             |         |         |                     |                     |
	| update-context | functional-945307                     | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC | 15 Jan 24 09:35 UTC |
	|                | update-context                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                |                             |         |         |                     |                     |
	| update-context | functional-945307                     | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC | 15 Jan 24 09:35 UTC |
	|                | update-context                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                |                             |         |         |                     |                     |
	| image          | functional-945307 image ls            | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:35 UTC | 15 Jan 24 09:36 UTC |
	| delete         | -p functional-945307                  | functional-945307           | jenkins | v1.32.0 | 15 Jan 24 09:36 UTC | 15 Jan 24 09:36 UTC |
	| start          | -p ingress-addon-legacy-865640        | ingress-addon-legacy-865640 | jenkins | v1.32.0 | 15 Jan 24 09:36 UTC | 15 Jan 24 09:37 UTC |
	|                | --kubernetes-version=v1.18.20         |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true             |                             |         |         |                     |                     |
	|                | --alsologtostderr                     |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                  |                             |         |         |                     |                     |
	|                | --container-runtime=crio              |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-865640           | ingress-addon-legacy-865640 | jenkins | v1.32.0 | 15 Jan 24 09:37 UTC | 15 Jan 24 09:37 UTC |
	|                | addons enable ingress                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-865640           | ingress-addon-legacy-865640 | jenkins | v1.32.0 | 15 Jan 24 09:37 UTC | 15 Jan 24 09:37 UTC |
	|                | addons enable ingress-dns             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-865640           | ingress-addon-legacy-865640 | jenkins | v1.32.0 | 15 Jan 24 09:37 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/         |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'          |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-865640 ip        | ingress-addon-legacy-865640 | jenkins | v1.32.0 | 15 Jan 24 09:39 UTC | 15 Jan 24 09:39 UTC |
	| addons         | ingress-addon-legacy-865640           | ingress-addon-legacy-865640 | jenkins | v1.32.0 | 15 Jan 24 09:40 UTC | 15 Jan 24 09:40 UTC |
	|                | addons disable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-865640           | ingress-addon-legacy-865640 | jenkins | v1.32.0 | 15 Jan 24 09:40 UTC | 15 Jan 24 09:40 UTC |
	|                | addons disable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                |                             |         |         |                     |                     |
	|----------------|---------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 09:36:15
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 09:36:15.408658   50889 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:36:15.408780   50889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:36:15.408789   50889 out.go:309] Setting ErrFile to fd 2...
	I0115 09:36:15.408793   50889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:36:15.409020   50889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-3696/.minikube/bin
	I0115 09:36:15.409610   50889 out.go:303] Setting JSON to false
	I0115 09:36:15.410594   50889 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1126,"bootTime":1705310250,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:36:15.410667   50889 start.go:138] virtualization: kvm guest
	I0115 09:36:15.413465   50889 out.go:177] * [ingress-addon-legacy-865640] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 09:36:15.415428   50889 notify.go:220] Checking for updates...
	I0115 09:36:15.417304   50889 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 09:36:15.419174   50889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:36:15.420928   50889 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:36:15.422699   50889 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube
	I0115 09:36:15.424381   50889 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 09:36:15.426029   50889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 09:36:15.427917   50889 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:36:15.449989   50889 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 09:36:15.450117   50889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 09:36:15.502714   50889 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-15 09:36:15.493460069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 09:36:15.502816   50889 docker.go:295] overlay module found
	I0115 09:36:15.506308   50889 out.go:177] * Using the docker driver based on user configuration
	I0115 09:36:15.508203   50889 start.go:298] selected driver: docker
	I0115 09:36:15.508225   50889 start.go:902] validating driver "docker" against <nil>
	I0115 09:36:15.508237   50889 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 09:36:15.509020   50889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 09:36:15.569754   50889 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-15 09:36:15.561658802 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 09:36:15.569898   50889 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 09:36:15.570156   50889 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 09:36:15.572826   50889 out.go:177] * Using Docker driver with root privileges
	I0115 09:36:15.574690   50889 cni.go:84] Creating CNI manager for ""
	I0115 09:36:15.574710   50889 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0115 09:36:15.574723   50889 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 09:36:15.574735   50889 start_flags.go:321] config:
	{Name:ingress-addon-legacy-865640 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-865640 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:36:15.577345   50889 out.go:177] * Starting control plane node ingress-addon-legacy-865640 in cluster ingress-addon-legacy-865640
	I0115 09:36:15.579962   50889 cache.go:121] Beginning downloading kic base image for docker with crio
	I0115 09:36:15.581828   50889 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 09:36:15.583499   50889 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0115 09:36:15.583629   50889 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 09:36:15.600646   50889 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0115 09:36:15.600685   50889 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0115 09:36:15.605080   50889 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0115 09:36:15.605126   50889 cache.go:56] Caching tarball of preloaded images
	I0115 09:36:15.605296   50889 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0115 09:36:15.607687   50889 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0115 09:36:15.609384   50889 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0115 09:36:15.633088   50889 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0115 09:36:18.893034   50889 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0115 09:36:18.893148   50889 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0115 09:36:19.902189   50889 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0115 09:36:19.902560   50889 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/config.json ...
	I0115 09:36:19.902590   50889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/config.json: {Name:mka326be596268d7b8b2b61ff2278ce6fdd6a8e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:36:19.902756   50889 cache.go:194] Successfully downloaded all kic artifacts
	I0115 09:36:19.902785   50889 start.go:365] acquiring machines lock for ingress-addon-legacy-865640: {Name:mkc7985efb27d70c72c77d118689c37e5567535f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:36:19.902827   50889 start.go:369] acquired machines lock for "ingress-addon-legacy-865640" in 31.193µs
	I0115 09:36:19.902845   50889 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-865640 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-865640 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 09:36:19.902920   50889 start.go:125] createHost starting for "" (driver="docker")
	I0115 09:36:19.907140   50889 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0115 09:36:19.907363   50889 start.go:159] libmachine.API.Create for "ingress-addon-legacy-865640" (driver="docker")
	I0115 09:36:19.907387   50889 client.go:168] LocalClient.Create starting
	I0115 09:36:19.907479   50889 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem
	I0115 09:36:19.907514   50889 main.go:141] libmachine: Decoding PEM data...
	I0115 09:36:19.907530   50889 main.go:141] libmachine: Parsing certificate...
	I0115 09:36:19.907580   50889 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem
	I0115 09:36:19.907600   50889 main.go:141] libmachine: Decoding PEM data...
	I0115 09:36:19.907611   50889 main.go:141] libmachine: Parsing certificate...
	I0115 09:36:19.907923   50889 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-865640 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 09:36:19.923729   50889 cli_runner.go:211] docker network inspect ingress-addon-legacy-865640 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 09:36:19.923815   50889 network_create.go:281] running [docker network inspect ingress-addon-legacy-865640] to gather additional debugging logs...
	I0115 09:36:19.923835   50889 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-865640
	W0115 09:36:19.938918   50889 cli_runner.go:211] docker network inspect ingress-addon-legacy-865640 returned with exit code 1
	I0115 09:36:19.938952   50889 network_create.go:284] error running [docker network inspect ingress-addon-legacy-865640]: docker network inspect ingress-addon-legacy-865640: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-865640 not found
	I0115 09:36:19.938969   50889 network_create.go:286] output of [docker network inspect ingress-addon-legacy-865640]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-865640 not found
	
	** /stderr **
	I0115 09:36:19.939082   50889 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 09:36:19.954758   50889 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000781420}
	I0115 09:36:19.954821   50889 network_create.go:124] attempt to create docker network ingress-addon-legacy-865640 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0115 09:36:19.954881   50889 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-865640 ingress-addon-legacy-865640
	I0115 09:36:20.008826   50889 network_create.go:108] docker network ingress-addon-legacy-865640 192.168.49.0/24 created
	I0115 09:36:20.008862   50889 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-865640" container
	I0115 09:36:20.008926   50889 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 09:36:20.023833   50889 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-865640 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-865640 --label created_by.minikube.sigs.k8s.io=true
	I0115 09:36:20.041388   50889 oci.go:103] Successfully created a docker volume ingress-addon-legacy-865640
	I0115 09:36:20.041482   50889 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-865640-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-865640 --entrypoint /usr/bin/test -v ingress-addon-legacy-865640:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 09:36:21.762191   50889 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-865640-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-865640 --entrypoint /usr/bin/test -v ingress-addon-legacy-865640:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.720660497s)
	I0115 09:36:21.762226   50889 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-865640
	I0115 09:36:21.762244   50889 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0115 09:36:21.762266   50889 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 09:36:21.762330   50889 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-865640:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 09:36:27.107803   50889 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-865640:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.34542844s)
	I0115 09:36:27.107835   50889 kic.go:203] duration metric: took 5.345568 seconds to extract preloaded images to volume
	W0115 09:36:27.107955   50889 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0115 09:36:27.108069   50889 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0115 09:36:27.160611   50889 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-865640 --name ingress-addon-legacy-865640 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-865640 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-865640 --network ingress-addon-legacy-865640 --ip 192.168.49.2 --volume ingress-addon-legacy-865640:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0115 09:36:27.442836   50889 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-865640 --format={{.State.Running}}
	I0115 09:36:27.461277   50889 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-865640 --format={{.State.Status}}
	I0115 09:36:27.479867   50889 cli_runner.go:164] Run: docker exec ingress-addon-legacy-865640 stat /var/lib/dpkg/alternatives/iptables
	I0115 09:36:27.521360   50889 oci.go:144] the created container "ingress-addon-legacy-865640" has a running status.
	I0115 09:36:27.521399   50889 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/ingress-addon-legacy-865640/id_rsa...
	I0115 09:36:27.742173   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/ingress-addon-legacy-865640/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0115 09:36:27.742224   50889 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17953-3696/.minikube/machines/ingress-addon-legacy-865640/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0115 09:36:27.769693   50889 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-865640 --format={{.State.Status}}
	I0115 09:36:27.789609   50889 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0115 09:36:27.789630   50889 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-865640 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0115 09:36:27.837048   50889 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-865640 --format={{.State.Status}}
	I0115 09:36:27.855195   50889 machine.go:88] provisioning docker machine ...
	I0115 09:36:27.855231   50889 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-865640"
	I0115 09:36:27.855297   50889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865640
	I0115 09:36:27.882402   50889 main.go:141] libmachine: Using SSH client type: native
	I0115 09:36:27.882747   50889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0115 09:36:27.882758   50889 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-865640 && echo "ingress-addon-legacy-865640" | sudo tee /etc/hostname
	I0115 09:36:28.087493   50889 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-865640
	
	I0115 09:36:28.087576   50889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865640
	I0115 09:36:28.105636   50889 main.go:141] libmachine: Using SSH client type: native
	I0115 09:36:28.106014   50889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0115 09:36:28.106040   50889 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-865640' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-865640/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-865640' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 09:36:28.237072   50889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 09:36:28.237126   50889 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17953-3696/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-3696/.minikube}
	I0115 09:36:28.237150   50889 ubuntu.go:177] setting up certificates
	I0115 09:36:28.237163   50889 provision.go:83] configureAuth start
	I0115 09:36:28.237219   50889 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-865640
	I0115 09:36:28.254989   50889 provision.go:138] copyHostCerts
	I0115 09:36:28.255033   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17953-3696/.minikube/ca.pem
	I0115 09:36:28.255063   50889 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-3696/.minikube/ca.pem, removing ...
	I0115 09:36:28.255084   50889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.pem
	I0115 09:36:28.255145   50889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-3696/.minikube/ca.pem (1082 bytes)
	I0115 09:36:28.255230   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17953-3696/.minikube/cert.pem
	I0115 09:36:28.255248   50889 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-3696/.minikube/cert.pem, removing ...
	I0115 09:36:28.255252   50889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-3696/.minikube/cert.pem
	I0115 09:36:28.255274   50889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-3696/.minikube/cert.pem (1123 bytes)
	I0115 09:36:28.255329   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17953-3696/.minikube/key.pem
	I0115 09:36:28.255344   50889 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-3696/.minikube/key.pem, removing ...
	I0115 09:36:28.255350   50889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-3696/.minikube/key.pem
	I0115 09:36:28.255371   50889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-3696/.minikube/key.pem (1679 bytes)
	I0115 09:36:28.255428   50889 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-865640 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-865640]
	I0115 09:36:28.571926   50889 provision.go:172] copyRemoteCerts
	I0115 09:36:28.571998   50889 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 09:36:28.572035   50889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865640
	I0115 09:36:28.588337   50889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/ingress-addon-legacy-865640/id_rsa Username:docker}
	I0115 09:36:28.681040   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 09:36:28.681143   50889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0115 09:36:28.702066   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 09:36:28.702131   50889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0115 09:36:28.722807   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 09:36:28.722867   50889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0115 09:36:28.743927   50889 provision.go:86] duration metric: configureAuth took 506.750257ms
	I0115 09:36:28.743971   50889 ubuntu.go:193] setting minikube options for container-runtime
	I0115 09:36:28.744147   50889 config.go:182] Loaded profile config "ingress-addon-legacy-865640": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0115 09:36:28.744267   50889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865640
	I0115 09:36:28.759971   50889 main.go:141] libmachine: Using SSH client type: native
	I0115 09:36:28.760312   50889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0115 09:36:28.760330   50889 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 09:36:29.001663   50889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 09:36:29.001695   50889 machine.go:91] provisioned docker machine in 1.146478123s
	I0115 09:36:29.001706   50889 client.go:171] LocalClient.Create took 9.094314154s
	I0115 09:36:29.001730   50889 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-865640" took 9.094369477s
	I0115 09:36:29.001738   50889 start.go:300] post-start starting for "ingress-addon-legacy-865640" (driver="docker")
	I0115 09:36:29.001748   50889 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 09:36:29.001805   50889 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 09:36:29.001853   50889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865640
	I0115 09:36:29.018171   50889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/ingress-addon-legacy-865640/id_rsa Username:docker}
	I0115 09:36:29.113805   50889 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 09:36:29.116697   50889 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0115 09:36:29.116731   50889 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0115 09:36:29.116739   50889 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0115 09:36:29.116745   50889 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0115 09:36:29.116754   50889 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-3696/.minikube/addons for local assets ...
	I0115 09:36:29.116811   50889 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-3696/.minikube/files for local assets ...
	I0115 09:36:29.116888   50889 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem -> 118252.pem in /etc/ssl/certs
	I0115 09:36:29.116900   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem -> /etc/ssl/certs/118252.pem
	I0115 09:36:29.116984   50889 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 09:36:29.124322   50889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem --> /etc/ssl/certs/118252.pem (1708 bytes)
	I0115 09:36:29.145730   50889 start.go:303] post-start completed in 143.975868ms
	I0115 09:36:29.146111   50889 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-865640
	I0115 09:36:29.162604   50889 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/config.json ...
	I0115 09:36:29.162840   50889 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 09:36:29.162893   50889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865640
	I0115 09:36:29.178370   50889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/ingress-addon-legacy-865640/id_rsa Username:docker}
	I0115 09:36:29.269731   50889 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 09:36:29.273650   50889 start.go:128] duration metric: createHost completed in 9.370715894s
	I0115 09:36:29.273676   50889 start.go:83] releasing machines lock for "ingress-addon-legacy-865640", held for 9.370836846s
	I0115 09:36:29.273735   50889 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-865640
	I0115 09:36:29.289911   50889 ssh_runner.go:195] Run: cat /version.json
	I0115 09:36:29.289974   50889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865640
	I0115 09:36:29.289983   50889 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 09:36:29.290058   50889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865640
	I0115 09:36:29.306233   50889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/ingress-addon-legacy-865640/id_rsa Username:docker}
	I0115 09:36:29.308215   50889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/ingress-addon-legacy-865640/id_rsa Username:docker}
	I0115 09:36:29.485212   50889 ssh_runner.go:195] Run: systemctl --version
	I0115 09:36:29.489290   50889 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 09:36:29.624300   50889 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 09:36:29.628399   50889 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 09:36:29.645396   50889 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0115 09:36:29.645480   50889 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 09:36:29.670842   50889 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0115 09:36:29.670886   50889 start.go:475] detecting cgroup driver to use...
	I0115 09:36:29.670917   50889 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0115 09:36:29.670954   50889 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 09:36:29.684447   50889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 09:36:29.693929   50889 docker.go:217] disabling cri-docker service (if available) ...
	I0115 09:36:29.693973   50889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 09:36:29.706139   50889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 09:36:29.719165   50889 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 09:36:29.790425   50889 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 09:36:29.865868   50889 docker.go:233] disabling docker service ...
	I0115 09:36:29.865924   50889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 09:36:29.882813   50889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 09:36:29.893136   50889 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 09:36:29.970868   50889 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 09:36:30.058798   50889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 09:36:30.069029   50889 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 09:36:30.083181   50889 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0115 09:36:30.083262   50889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:36:30.091998   50889 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 09:36:30.092056   50889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:36:30.100614   50889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:36:30.109068   50889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:36:30.117449   50889 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 09:36:30.125268   50889 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 09:36:30.132399   50889 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 09:36:30.139699   50889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 09:36:30.214539   50889 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 09:36:30.319037   50889 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 09:36:30.319092   50889 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 09:36:30.322462   50889 start.go:543] Will wait 60s for crictl version
	I0115 09:36:30.322515   50889 ssh_runner.go:195] Run: which crictl
	I0115 09:36:30.325649   50889 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 09:36:30.358262   50889 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0115 09:36:30.358352   50889 ssh_runner.go:195] Run: crio --version
	I0115 09:36:30.391768   50889 ssh_runner.go:195] Run: crio --version
	I0115 09:36:30.426459   50889 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0115 09:36:30.428279   50889 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-865640 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 09:36:30.444442   50889 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0115 09:36:30.448035   50889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 09:36:30.458035   50889 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0115 09:36:30.458097   50889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 09:36:30.501356   50889 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0115 09:36:30.501415   50889 ssh_runner.go:195] Run: which lz4
	I0115 09:36:30.504643   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0115 09:36:30.504740   50889 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0115 09:36:30.507804   50889 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 09:36:30.507832   50889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0115 09:36:31.457808   50889 crio.go:444] Took 0.953092 seconds to copy over tarball
	I0115 09:36:31.457877   50889 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 09:36:33.729992   50889 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.272089084s)
	I0115 09:36:33.730022   50889 crio.go:451] Took 2.272188 seconds to extract the tarball
	I0115 09:36:33.730034   50889 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 09:36:33.798527   50889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 09:36:33.829804   50889 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0115 09:36:33.829828   50889 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0115 09:36:33.829890   50889 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0115 09:36:33.829920   50889 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0115 09:36:33.829932   50889 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0115 09:36:33.829956   50889 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0115 09:36:33.829962   50889 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 09:36:33.829973   50889 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0115 09:36:33.829887   50889 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 09:36:33.829931   50889 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0115 09:36:33.831005   50889 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 09:36:33.831020   50889 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0115 09:36:33.831035   50889 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0115 09:36:33.831009   50889 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 09:36:33.831044   50889 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0115 09:36:33.831121   50889 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0115 09:36:33.831005   50889 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0115 09:36:33.831011   50889 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0115 09:36:33.982934   50889 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0115 09:36:33.991081   50889 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0115 09:36:34.019246   50889 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0115 09:36:34.019293   50889 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0115 09:36:34.019337   50889 ssh_runner.go:195] Run: which crictl
	I0115 09:36:34.027984   50889 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0115 09:36:34.028028   50889 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0115 09:36:34.028034   50889 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0115 09:36:34.028057   50889 ssh_runner.go:195] Run: which crictl
	I0115 09:36:34.034453   50889 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0115 09:36:34.049936   50889 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0115 09:36:34.053213   50889 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0115 09:36:34.062576   50889 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-3696/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0115 09:36:34.062657   50889 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0115 09:36:34.075021   50889 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 09:36:34.076258   50889 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0115 09:36:34.076365   50889 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0115 09:36:34.076411   50889 ssh_runner.go:195] Run: which crictl
	I0115 09:36:34.127259   50889 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 09:36:34.128060   50889 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0115 09:36:34.135305   50889 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0115 09:36:34.135357   50889 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0115 09:36:34.135413   50889 ssh_runner.go:195] Run: which crictl
	I0115 09:36:34.144714   50889 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0115 09:36:34.144777   50889 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0115 09:36:34.144811   50889 ssh_runner.go:195] Run: which crictl
	I0115 09:36:34.148744   50889 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-3696/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0115 09:36:34.227673   50889 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0115 09:36:34.227735   50889 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 09:36:34.227789   50889 ssh_runner.go:195] Run: which crictl
	I0115 09:36:34.227809   50889 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0115 09:36:34.338082   50889 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0115 09:36:34.338124   50889 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0115 09:36:34.338155   50889 ssh_runner.go:195] Run: which crictl
	I0115 09:36:34.338162   50889 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0115 09:36:34.338166   50889 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0115 09:36:34.338244   50889 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-3696/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0115 09:36:34.338249   50889 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 09:36:34.376862   50889 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-3696/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0115 09:36:34.376894   50889 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-3696/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0115 09:36:34.376914   50889 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0115 09:36:34.376960   50889 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-3696/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0115 09:36:34.407170   50889 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-3696/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0115 09:36:34.407231   50889 cache_images.go:92] LoadImages completed in 577.39035ms
	W0115 09:36:34.407305   50889 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17953-3696/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
	I0115 09:36:34.407361   50889 ssh_runner.go:195] Run: crio config
	I0115 09:36:34.460761   50889 cni.go:84] Creating CNI manager for ""
	I0115 09:36:34.460782   50889 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0115 09:36:34.460801   50889 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 09:36:34.460818   50889 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-865640 NodeName:ingress-addon-legacy-865640 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0115 09:36:34.460948   50889 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-865640"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 09:36:34.461014   50889 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-865640 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-865640 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 09:36:34.461061   50889 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0115 09:36:34.468987   50889 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 09:36:34.469062   50889 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 09:36:34.476809   50889 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0115 09:36:34.492392   50889 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0115 09:36:34.508835   50889 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0115 09:36:34.524659   50889 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0115 09:36:34.527932   50889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 09:36:34.537371   50889 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640 for IP: 192.168.49.2
	I0115 09:36:34.537408   50889 certs.go:190] acquiring lock for shared ca certs: {Name:mk436e7b36fef987bcfd7cb65df7b354c02b1a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:36:34.537560   50889 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.key
	I0115 09:36:34.537698   50889 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.key
	I0115 09:36:34.537814   50889 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.key
	I0115 09:36:34.537839   50889 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt with IP's: []
	I0115 09:36:34.769631   50889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt ...
	I0115 09:36:34.769667   50889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: {Name:mk91a6096b7892e10cbf0fa6494aa041f08ef832 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:36:34.769861   50889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.key ...
	I0115 09:36:34.769890   50889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.key: {Name:mk8b054d329041125abb2f27afd579abb50f0c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:36:34.769991   50889 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/apiserver.key.dd3b5fb2
	I0115 09:36:34.770017   50889 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0115 09:36:34.868066   50889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/apiserver.crt.dd3b5fb2 ...
	I0115 09:36:34.868103   50889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/apiserver.crt.dd3b5fb2: {Name:mk97b18fa02302b5e39dfa71ebc40b9565b4f8c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:36:34.868289   50889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/apiserver.key.dd3b5fb2 ...
	I0115 09:36:34.868308   50889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/apiserver.key.dd3b5fb2: {Name:mk3dc29ecc96579882eca553ea518e37b9b3a35e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:36:34.868404   50889 certs.go:337] copying /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/apiserver.crt
	I0115 09:36:34.868506   50889 certs.go:341] copying /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/apiserver.key
	I0115 09:36:34.868593   50889 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/proxy-client.key
	I0115 09:36:34.868613   50889 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/proxy-client.crt with IP's: []
	I0115 09:36:34.976061   50889 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/proxy-client.crt ...
	I0115 09:36:34.976105   50889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/proxy-client.crt: {Name:mk94f6de39172f8f349197e7137047bd61470d38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:36:34.976327   50889 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/proxy-client.key ...
	I0115 09:36:34.976349   50889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/proxy-client.key: {Name:mka837a014bcc7ffeea202acbe220d0df28d4e34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:36:34.976472   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0115 09:36:34.976502   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0115 09:36:34.976521   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0115 09:36:34.976539   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0115 09:36:34.976561   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 09:36:34.976580   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 09:36:34.976599   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 09:36:34.976622   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 09:36:34.976698   50889 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/11825.pem (1338 bytes)
	W0115 09:36:34.976755   50889 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/11825_empty.pem, impossibly tiny 0 bytes
	I0115 09:36:34.976777   50889 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca-key.pem (1675 bytes)
	I0115 09:36:34.976834   50889 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem (1082 bytes)
	I0115 09:36:34.976880   50889 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem (1123 bytes)
	I0115 09:36:34.976924   50889 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/key.pem (1679 bytes)
	I0115 09:36:34.976993   50889 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem (1708 bytes)
	I0115 09:36:34.977041   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/11825.pem -> /usr/share/ca-certificates/11825.pem
	I0115 09:36:34.977065   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem -> /usr/share/ca-certificates/118252.pem
	I0115 09:36:34.977087   50889 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:36:34.977778   50889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 09:36:34.998984   50889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 09:36:35.020140   50889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 09:36:35.041001   50889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 09:36:35.061541   50889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 09:36:35.082031   50889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 09:36:35.102737   50889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 09:36:35.123405   50889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 09:36:35.143913   50889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/certs/11825.pem --> /usr/share/ca-certificates/11825.pem (1338 bytes)
	I0115 09:36:35.164936   50889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem --> /usr/share/ca-certificates/118252.pem (1708 bytes)
	I0115 09:36:35.185834   50889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 09:36:35.206947   50889 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 09:36:35.222951   50889 ssh_runner.go:195] Run: openssl version
	I0115 09:36:35.227998   50889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/118252.pem && ln -fs /usr/share/ca-certificates/118252.pem /etc/ssl/certs/118252.pem"
	I0115 09:36:35.236604   50889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/118252.pem
	I0115 09:36:35.239940   50889 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:33 /usr/share/ca-certificates/118252.pem
	I0115 09:36:35.240005   50889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/118252.pem
	I0115 09:36:35.246412   50889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/118252.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 09:36:35.255012   50889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 09:36:35.263552   50889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:36:35.266716   50889 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:36:35.266770   50889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:36:35.272937   50889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 09:36:35.281375   50889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11825.pem && ln -fs /usr/share/ca-certificates/11825.pem /etc/ssl/certs/11825.pem"
	I0115 09:36:35.289381   50889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11825.pem
	I0115 09:36:35.292391   50889 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:33 /usr/share/ca-certificates/11825.pem
	I0115 09:36:35.292448   50889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11825.pem
	I0115 09:36:35.298590   50889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11825.pem /etc/ssl/certs/51391683.0"
	I0115 09:36:35.306788   50889 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 09:36:35.309782   50889 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 09:36:35.309833   50889 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-865640 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-865640 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:36:35.309924   50889 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 09:36:35.309984   50889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 09:36:35.342635   50889 cri.go:89] found id: ""
	I0115 09:36:35.342727   50889 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 09:36:35.350983   50889 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 09:36:35.358925   50889 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0115 09:36:35.358973   50889 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 09:36:35.366938   50889 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 09:36:35.366974   50889 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0115 09:36:35.409263   50889 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0115 09:36:35.412088   50889 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 09:36:35.447208   50889 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0115 09:36:35.447323   50889 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1048-gcp
	I0115 09:36:35.447385   50889 kubeadm.go:322] OS: Linux
	I0115 09:36:35.447482   50889 kubeadm.go:322] CGROUPS_CPU: enabled
	I0115 09:36:35.447587   50889 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0115 09:36:35.447677   50889 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0115 09:36:35.447770   50889 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0115 09:36:35.447864   50889 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0115 09:36:35.447978   50889 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0115 09:36:35.515634   50889 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 09:36:35.515763   50889 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 09:36:35.515893   50889 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 09:36:35.695776   50889 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 09:36:35.696789   50889 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 09:36:35.696879   50889 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0115 09:36:35.769071   50889 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 09:36:35.771471   50889 out.go:204]   - Generating certificates and keys ...
	I0115 09:36:35.771584   50889 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 09:36:35.771676   50889 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 09:36:35.990252   50889 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 09:36:36.152572   50889 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0115 09:36:36.437897   50889 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0115 09:36:36.668299   50889 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0115 09:36:36.855130   50889 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0115 09:36:36.855299   50889 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-865640 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 09:36:36.990488   50889 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0115 09:36:36.990623   50889 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-865640 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 09:36:37.167221   50889 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 09:36:37.306922   50889 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 09:36:37.664280   50889 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0115 09:36:37.664391   50889 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 09:36:37.775619   50889 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 09:36:37.896803   50889 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 09:36:38.149450   50889 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 09:36:38.231521   50889 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 09:36:38.232062   50889 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 09:36:38.234288   50889 out.go:204]   - Booting up control plane ...
	I0115 09:36:38.234395   50889 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 09:36:38.238538   50889 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 09:36:38.239447   50889 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 09:36:38.240040   50889 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 09:36:38.242029   50889 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 09:36:44.744698   50889 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502529 seconds
	I0115 09:36:44.744880   50889 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 09:36:44.757157   50889 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 09:36:45.276190   50889 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 09:36:45.276409   50889 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-865640 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0115 09:36:45.782934   50889 kubeadm.go:322] [bootstrap-token] Using token: vgpzcl.tulxjgw7375h0f32
	I0115 09:36:45.784894   50889 out.go:204]   - Configuring RBAC rules ...
	I0115 09:36:45.785001   50889 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 09:36:45.788001   50889 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 09:36:45.793859   50889 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 09:36:45.795652   50889 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 09:36:45.797512   50889 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 09:36:45.799188   50889 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 09:36:45.806641   50889 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 09:36:46.032353   50889 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 09:36:46.198659   50889 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 09:36:46.199885   50889 kubeadm.go:322] 
	I0115 09:36:46.199998   50889 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 09:36:46.200018   50889 kubeadm.go:322] 
	I0115 09:36:46.200141   50889 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 09:36:46.200157   50889 kubeadm.go:322] 
	I0115 09:36:46.200187   50889 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 09:36:46.200297   50889 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 09:36:46.200387   50889 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 09:36:46.200399   50889 kubeadm.go:322] 
	I0115 09:36:46.200475   50889 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 09:36:46.200579   50889 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 09:36:46.200672   50889 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 09:36:46.200686   50889 kubeadm.go:322] 
	I0115 09:36:46.200804   50889 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 09:36:46.200924   50889 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 09:36:46.200954   50889 kubeadm.go:322] 
	I0115 09:36:46.201086   50889 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vgpzcl.tulxjgw7375h0f32 \
	I0115 09:36:46.201275   50889 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d7912295337f01ac2906deb500e7500df52d877bdb5cb26be73339deab38c6d2 \
	I0115 09:36:46.201318   50889 kubeadm.go:322]     --control-plane 
	I0115 09:36:46.201328   50889 kubeadm.go:322] 
	I0115 09:36:46.201452   50889 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 09:36:46.201465   50889 kubeadm.go:322] 
	I0115 09:36:46.201585   50889 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vgpzcl.tulxjgw7375h0f32 \
	I0115 09:36:46.201717   50889 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d7912295337f01ac2906deb500e7500df52d877bdb5cb26be73339deab38c6d2 
	I0115 09:36:46.202962   50889 kubeadm.go:322] W0115 09:36:35.408765    1384 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0115 09:36:46.203211   50889 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-gcp\n", err: exit status 1
	I0115 09:36:46.203361   50889 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 09:36:46.203554   50889 kubeadm.go:322] W0115 09:36:38.238308    1384 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0115 09:36:46.203746   50889 kubeadm.go:322] W0115 09:36:38.239289    1384 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0115 09:36:46.203762   50889 cni.go:84] Creating CNI manager for ""
	I0115 09:36:46.203771   50889 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0115 09:36:46.206605   50889 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0115 09:36:46.207950   50889 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 09:36:46.211617   50889 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0115 09:36:46.211633   50889 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 09:36:46.227888   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0115 09:36:46.670242   50889 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 09:36:46.670297   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:46.670324   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=ingress-addon-legacy-865640 minikube.k8s.io/updated_at=2024_01_15T09_36_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:46.765057   50889 ops.go:34] apiserver oom_adj: -16
	I0115 09:36:46.765226   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:47.266329   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:47.766344   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:48.266293   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:48.766322   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:49.266283   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:49.766304   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:50.266299   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:50.765423   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:51.266195   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:51.765473   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:52.266123   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:52.766010   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:53.266060   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:53.765941   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:54.266266   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:54.766034   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:55.266306   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:55.765901   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:56.266113   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:56.765766   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:57.266334   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:57.766240   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:58.266286   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:58.765875   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:59.265716   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:36:59.765710   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:37:00.266260   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:37:00.766297   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:37:01.265887   50889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:37:01.340604   50889 kubeadm.go:1088] duration metric: took 14.670359114s to wait for elevateKubeSystemPrivileges.
	I0115 09:37:01.340639   50889 kubeadm.go:406] StartCluster complete in 26.030808638s
	I0115 09:37:01.340665   50889 settings.go:142] acquiring lock: {Name:mkbf6aded3b549fa4f3ab1cad294a9ebed536616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:37:01.340739   50889 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:37:01.341489   50889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/kubeconfig: {Name:mk31241d29ab70870dc379ecd59996acb9413d82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:37:01.341743   50889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 09:37:01.341797   50889 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 09:37:01.341871   50889 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-865640"
	I0115 09:37:01.341895   50889 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-865640"
	I0115 09:37:01.341952   50889 host.go:66] Checking if "ingress-addon-legacy-865640" exists ...
	I0115 09:37:01.342029   50889 config.go:182] Loaded profile config "ingress-addon-legacy-865640": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0115 09:37:01.342087   50889 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-865640"
	I0115 09:37:01.342104   50889 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-865640"
	I0115 09:37:01.342391   50889 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-865640 --format={{.State.Status}}
	I0115 09:37:01.342488   50889 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-865640 --format={{.State.Status}}
	I0115 09:37:01.342433   50889 kapi.go:59] client config for ingress-addon-legacy-865640: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.key", CAFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 09:37:01.343457   50889 cert_rotation.go:137] Starting client certificate rotation controller
	I0115 09:37:01.364379   50889 kapi.go:59] client config for ingress-addon-legacy-865640: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.key", CAFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 09:37:01.364654   50889 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-865640"
	I0115 09:37:01.364681   50889 host.go:66] Checking if "ingress-addon-legacy-865640" exists ...
	I0115 09:37:01.365018   50889 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-865640 --format={{.State.Status}}
	I0115 09:37:01.370237   50889 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 09:37:01.371877   50889 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 09:37:01.371900   50889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 09:37:01.371964   50889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865640
	I0115 09:37:01.382869   50889 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 09:37:01.382890   50889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 09:37:01.382935   50889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865640
	I0115 09:37:01.393799   50889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/ingress-addon-legacy-865640/id_rsa Username:docker}
	I0115 09:37:01.401199   50889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/ingress-addon-legacy-865640/id_rsa Username:docker}
	I0115 09:37:01.428053   50889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0115 09:37:01.550713   50889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 09:37:01.626608   50889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 09:37:01.763078   50889 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0115 09:37:01.845715   50889 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-865640" context rescaled to 1 replicas
	I0115 09:37:01.845763   50889 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 09:37:01.847647   50889 out.go:177] * Verifying Kubernetes components...
	I0115 09:37:01.849329   50889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:37:02.160193   50889 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0115 09:37:02.159171   50889 kapi.go:59] client config for ingress-addon-legacy-865640: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.key", CAFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 09:37:02.226836   50889 addons.go:505] enable addons completed in 885.030807ms: enabled=[default-storageclass storage-provisioner]
	I0115 09:37:02.227148   50889 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-865640" to be "Ready" ...
	I0115 09:37:04.230588   50889 node_ready.go:58] node "ingress-addon-legacy-865640" has status "Ready":"False"
	I0115 09:37:06.745893   50889 node_ready.go:49] node "ingress-addon-legacy-865640" has status "Ready":"True"
	I0115 09:37:06.745918   50889 node_ready.go:38] duration metric: took 4.518744082s waiting for node "ingress-addon-legacy-865640" to be "Ready" ...
	I0115 09:37:06.745928   50889 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 09:37:06.773592   50889 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-f8dgp" in "kube-system" namespace to be "Ready" ...
	I0115 09:37:08.777031   50889 pod_ready.go:102] pod "coredns-66bff467f8-f8dgp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-15 09:37:01 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0115 09:37:11.276983   50889 pod_ready.go:102] pod "coredns-66bff467f8-f8dgp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-15 09:37:01 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0115 09:37:13.279170   50889 pod_ready.go:102] pod "coredns-66bff467f8-f8dgp" in "kube-system" namespace has status "Ready":"False"
	I0115 09:37:15.279366   50889 pod_ready.go:102] pod "coredns-66bff467f8-f8dgp" in "kube-system" namespace has status "Ready":"False"
	I0115 09:37:17.779342   50889 pod_ready.go:102] pod "coredns-66bff467f8-f8dgp" in "kube-system" namespace has status "Ready":"False"
	I0115 09:37:19.279669   50889 pod_ready.go:92] pod "coredns-66bff467f8-f8dgp" in "kube-system" namespace has status "Ready":"True"
	I0115 09:37:19.279696   50889 pod_ready.go:81] duration metric: took 12.506073399s waiting for pod "coredns-66bff467f8-f8dgp" in "kube-system" namespace to be "Ready" ...
	I0115 09:37:19.279707   50889 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-865640" in "kube-system" namespace to be "Ready" ...
	I0115 09:37:19.283697   50889 pod_ready.go:92] pod "etcd-ingress-addon-legacy-865640" in "kube-system" namespace has status "Ready":"True"
	I0115 09:37:19.283726   50889 pod_ready.go:81] duration metric: took 4.00609ms waiting for pod "etcd-ingress-addon-legacy-865640" in "kube-system" namespace to be "Ready" ...
	I0115 09:37:19.283758   50889 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-865640" in "kube-system" namespace to be "Ready" ...
	I0115 09:37:19.287752   50889 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-865640" in "kube-system" namespace has status "Ready":"True"
	I0115 09:37:19.287774   50889 pod_ready.go:81] duration metric: took 4.008435ms waiting for pod "kube-apiserver-ingress-addon-legacy-865640" in "kube-system" namespace to be "Ready" ...
	I0115 09:37:19.287784   50889 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-865640" in "kube-system" namespace to be "Ready" ...
	I0115 09:37:19.291664   50889 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-865640" in "kube-system" namespace has status "Ready":"True"
	I0115 09:37:19.291685   50889 pod_ready.go:81] duration metric: took 3.894811ms waiting for pod "kube-controller-manager-ingress-addon-legacy-865640" in "kube-system" namespace to be "Ready" ...
	I0115 09:37:19.291698   50889 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mxbgl" in "kube-system" namespace to be "Ready" ...
	I0115 09:37:19.295807   50889 pod_ready.go:92] pod "kube-proxy-mxbgl" in "kube-system" namespace has status "Ready":"True"
	I0115 09:37:19.295827   50889 pod_ready.go:81] duration metric: took 4.123207ms waiting for pod "kube-proxy-mxbgl" in "kube-system" namespace to be "Ready" ...
	I0115 09:37:19.295837   50889 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-865640" in "kube-system" namespace to be "Ready" ...
	I0115 09:37:19.475255   50889 request.go:629] Waited for 179.353079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-865640
	I0115 09:37:19.675359   50889 request.go:629] Waited for 197.354331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-865640
	I0115 09:37:19.678094   50889 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-865640" in "kube-system" namespace has status "Ready":"True"
	I0115 09:37:19.678117   50889 pod_ready.go:81] duration metric: took 382.274399ms waiting for pod "kube-scheduler-ingress-addon-legacy-865640" in "kube-system" namespace to be "Ready" ...
	I0115 09:37:19.678129   50889 pod_ready.go:38] duration metric: took 12.932191421s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 09:37:19.678149   50889 api_server.go:52] waiting for apiserver process to appear ...
	I0115 09:37:19.678206   50889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 09:37:19.689018   50889 api_server.go:72] duration metric: took 17.843193138s to wait for apiserver process to appear ...
	I0115 09:37:19.689042   50889 api_server.go:88] waiting for apiserver healthz status ...
	I0115 09:37:19.689066   50889 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0115 09:37:19.693932   50889 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0115 09:37:19.694743   50889 api_server.go:141] control plane version: v1.18.20
	I0115 09:37:19.694796   50889 api_server.go:131] duration metric: took 5.746995ms to wait for apiserver health ...
	I0115 09:37:19.694806   50889 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 09:37:19.875212   50889 request.go:629] Waited for 180.338959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0115 09:37:19.880481   50889 system_pods.go:59] 8 kube-system pods found
	I0115 09:37:19.880525   50889 system_pods.go:61] "coredns-66bff467f8-f8dgp" [67217e4e-c8e3-411b-82e9-513ea3b3b0af] Running
	I0115 09:37:19.880531   50889 system_pods.go:61] "etcd-ingress-addon-legacy-865640" [45813e1e-14a2-44f1-aaaf-d680042902f2] Running
	I0115 09:37:19.880539   50889 system_pods.go:61] "kindnet-4fklw" [653f2d5d-8eca-401a-9720-996f99a7981b] Running
	I0115 09:37:19.880545   50889 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-865640" [d532e527-2adc-4e41-8956-d7b6d81e8a58] Running
	I0115 09:37:19.880550   50889 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-865640" [1f42c6bf-480d-4794-957f-e28fd140ad4a] Running
	I0115 09:37:19.880556   50889 system_pods.go:61] "kube-proxy-mxbgl" [7af15d34-c4b6-4963-86f8-fac20cfa34d7] Running
	I0115 09:37:19.880562   50889 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-865640" [138b379a-1315-4fa2-812f-b1433ab32854] Running
	I0115 09:37:19.880568   50889 system_pods.go:61] "storage-provisioner" [25b0f221-1049-4810-b9b0-ff0f870f1866] Running
	I0115 09:37:19.880580   50889 system_pods.go:74] duration metric: took 185.767573ms to wait for pod list to return data ...
	I0115 09:37:19.880595   50889 default_sa.go:34] waiting for default service account to be created ...
	I0115 09:37:20.074941   50889 request.go:629] Waited for 194.274205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0115 09:37:20.077333   50889 default_sa.go:45] found service account: "default"
	I0115 09:37:20.077358   50889 default_sa.go:55] duration metric: took 196.757562ms for default service account to be created ...
	I0115 09:37:20.077367   50889 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 09:37:20.275817   50889 request.go:629] Waited for 198.352233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0115 09:37:20.281145   50889 system_pods.go:86] 8 kube-system pods found
	I0115 09:37:20.281183   50889 system_pods.go:89] "coredns-66bff467f8-f8dgp" [67217e4e-c8e3-411b-82e9-513ea3b3b0af] Running
	I0115 09:37:20.281190   50889 system_pods.go:89] "etcd-ingress-addon-legacy-865640" [45813e1e-14a2-44f1-aaaf-d680042902f2] Running
	I0115 09:37:20.281197   50889 system_pods.go:89] "kindnet-4fklw" [653f2d5d-8eca-401a-9720-996f99a7981b] Running
	I0115 09:37:20.281204   50889 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-865640" [d532e527-2adc-4e41-8956-d7b6d81e8a58] Running
	I0115 09:37:20.281212   50889 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-865640" [1f42c6bf-480d-4794-957f-e28fd140ad4a] Running
	I0115 09:37:20.281218   50889 system_pods.go:89] "kube-proxy-mxbgl" [7af15d34-c4b6-4963-86f8-fac20cfa34d7] Running
	I0115 09:37:20.281225   50889 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-865640" [138b379a-1315-4fa2-812f-b1433ab32854] Running
	I0115 09:37:20.281235   50889 system_pods.go:89] "storage-provisioner" [25b0f221-1049-4810-b9b0-ff0f870f1866] Running
	I0115 09:37:20.281244   50889 system_pods.go:126] duration metric: took 203.870665ms to wait for k8s-apps to be running ...
	I0115 09:37:20.281259   50889 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 09:37:20.281309   50889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:37:20.291841   50889 system_svc.go:56] duration metric: took 10.573515ms WaitForService to wait for kubelet.
	I0115 09:37:20.291871   50889 kubeadm.go:581] duration metric: took 18.446051515s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 09:37:20.291896   50889 node_conditions.go:102] verifying NodePressure condition ...
	I0115 09:37:20.475441   50889 request.go:629] Waited for 183.464255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0115 09:37:20.478512   50889 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0115 09:37:20.478542   50889 node_conditions.go:123] node cpu capacity is 8
	I0115 09:37:20.478553   50889 node_conditions.go:105] duration metric: took 186.651801ms to run NodePressure ...
	I0115 09:37:20.478563   50889 start.go:228] waiting for startup goroutines ...
	I0115 09:37:20.478571   50889 start.go:233] waiting for cluster config update ...
	I0115 09:37:20.478581   50889 start.go:242] writing updated cluster config ...
	I0115 09:37:20.478833   50889 ssh_runner.go:195] Run: rm -f paused
	I0115 09:37:20.524059   50889 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0115 09:37:20.526358   50889 out.go:177] 
	W0115 09:37:20.527859   50889 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0115 09:37:20.529250   50889 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0115 09:37:20.530785   50889 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-865640" cluster and "default" namespace by default
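	
	The start log above finishes only after minikube has confirmed node readiness, system-critical pod readiness, and apiserver health. As a minimal sketch, the same checks could be reproduced by hand against this cluster, assuming the kubectl context created for the profile is named "ingress-addon-legacy-865640" as the final log line reports:
	
	  # node readiness and readiness of the kube-dns (CoreDNS) pods waited on above
	  kubectl --context ingress-addon-legacy-865640 get nodes
	  kubectl --context ingress-addon-legacy-865640 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
	  # apiserver healthz, the same endpoint polled at 09:37:19 above
	  kubectl --context ingress-addon-legacy-865640 get --raw /healthz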
	
	
	==> CRI-O <==
	Jan 15 09:40:01 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:01.253571754Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-h22rg/hello-world-app" id=f0baa42b-f8f0-4437-9371-aec4c7dac659 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jan 15 09:40:01 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:01.253715419Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 15 09:40:01 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:01.323117127Z" level=info msg="Created container 30a376f4d32e3d1d625572614a98a0fce880651154a14b2b6f0d0a09dcf0fb9f: default/hello-world-app-5f5d8b66bb-h22rg/hello-world-app" id=f0baa42b-f8f0-4437-9371-aec4c7dac659 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jan 15 09:40:01 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:01.323710178Z" level=info msg="Starting container: 30a376f4d32e3d1d625572614a98a0fce880651154a14b2b6f0d0a09dcf0fb9f" id=1ba3ca49-8cd8-4022-b692-cb06ab355118 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jan 15 09:40:01 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:01.331250257Z" level=info msg="Started container" PID=4868 containerID=30a376f4d32e3d1d625572614a98a0fce880651154a14b2b6f0d0a09dcf0fb9f description=default/hello-world-app-5f5d8b66bb-h22rg/hello-world-app id=1ba3ca49-8cd8-4022-b692-cb06ab355118 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=2ac3f86cd88be1f74962787ef59cf124bdaa689c7fc6cb5424fd33879bb5e4df
	Jan 15 09:40:07 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:07.382100768Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=b8e0cc45-6dbb-4dbd-9bfa-a7c005b79378 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 15 09:40:16 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:16.381843674Z" level=info msg="Stopping pod sandbox: 6997c7e9abfa6e584a85aa2d9ed7e55b223567c3c76b2e523171a3c958ca69d8" id=21e20ceb-a342-44d0-af06-b84b809119ce name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 15 09:40:16 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:16.382927566Z" level=info msg="Stopped pod sandbox: 6997c7e9abfa6e584a85aa2d9ed7e55b223567c3c76b2e523171a3c958ca69d8" id=21e20ceb-a342-44d0-af06-b84b809119ce name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 15 09:40:17 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:17.168853380Z" level=info msg="Stopping container: a4f713bc49f593e2d4102fd9da8229381d965583a8be037c1a8df5c74db1e356 (timeout: 2s)" id=d702de85-9648-4a84-8182-5acfc3d9c36a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 15 09:40:17 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:17.170096042Z" level=info msg="Stopping container: a4f713bc49f593e2d4102fd9da8229381d965583a8be037c1a8df5c74db1e356 (timeout: 2s)" id=dd14b64e-621f-4824-9cab-de95e6508936 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 15 09:40:19 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:19.176812467Z" level=warning msg="Stopping container a4f713bc49f593e2d4102fd9da8229381d965583a8be037c1a8df5c74db1e356 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=d702de85-9648-4a84-8182-5acfc3d9c36a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 15 09:40:19 ingress-addon-legacy-865640 conmon[3412]: conmon a4f713bc49f593e2d410 <ninfo>: container 3424 exited with status 137
	Jan 15 09:40:19 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:19.324572023Z" level=info msg="Stopped container a4f713bc49f593e2d4102fd9da8229381d965583a8be037c1a8df5c74db1e356: ingress-nginx/ingress-nginx-controller-7fcf777cb7-2skkf/controller" id=dd14b64e-621f-4824-9cab-de95e6508936 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 15 09:40:19 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:19.324559590Z" level=info msg="Stopped container a4f713bc49f593e2d4102fd9da8229381d965583a8be037c1a8df5c74db1e356: ingress-nginx/ingress-nginx-controller-7fcf777cb7-2skkf/controller" id=d702de85-9648-4a84-8182-5acfc3d9c36a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 15 09:40:19 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:19.325307453Z" level=info msg="Stopping pod sandbox: 3e3d3348301f06cbef5b76b491640200c2e4fd398c8d3d380888dd033183efeb" id=86529133-07ea-4352-b441-169910d82d91 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 15 09:40:19 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:19.325319809Z" level=info msg="Stopping pod sandbox: 3e3d3348301f06cbef5b76b491640200c2e4fd398c8d3d380888dd033183efeb" id=796bd249-0a48-40c9-9ec4-a21f57d84306 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 15 09:40:19 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:19.328242187Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-RJ7REFSNUKENVDSL - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-ZSKSFNNRGWN4R2OI - [0:0]\n-X KUBE-HP-RJ7REFSNUKENVDSL\n-X KUBE-HP-ZSKSFNNRGWN4R2OI\nCOMMIT\n"
	Jan 15 09:40:19 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:19.329667750Z" level=info msg="Closing host port tcp:80"
	Jan 15 09:40:19 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:19.329713485Z" level=info msg="Closing host port tcp:443"
	Jan 15 09:40:19 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:19.330717744Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 15 09:40:19 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:19.330735294Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 15 09:40:19 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:19.330861355Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-2skkf Namespace:ingress-nginx ID:3e3d3348301f06cbef5b76b491640200c2e4fd398c8d3d380888dd033183efeb UID:65bb0b4e-19ee-4591-94d0-3703921f794f NetNS:/var/run/netns/9ca476ca-6160-4af4-bb8e-2501362f44d9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 15 09:40:19 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:19.330977386Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-2skkf from CNI network \"kindnet\" (type=ptp)"
	Jan 15 09:40:19 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:19.358513468Z" level=info msg="Stopped pod sandbox: 3e3d3348301f06cbef5b76b491640200c2e4fd398c8d3d380888dd033183efeb" id=86529133-07ea-4352-b441-169910d82d91 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 15 09:40:19 ingress-addon-legacy-865640 crio[954]: time="2024-01-15 09:40:19.358625822Z" level=info msg="Stopped pod sandbox (already stopped): 3e3d3348301f06cbef5b76b491640200c2e4fd398c8d3d380888dd033183efeb" id=796bd249-0a48-40c9-9ec4-a21f57d84306 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	30a376f4d32e3       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            23 seconds ago      Running             hello-world-app           0                   2ac3f86cd88be       hello-world-app-5f5d8b66bb-h22rg
	79c352953e4f2       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                    2 minutes ago       Running             nginx                     0                   e90f257e7a648       nginx
	a4f713bc49f59       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   3e3d3348301f0       ingress-nginx-controller-7fcf777cb7-2skkf
	c038c7b587d71       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   f69e021c21ef2       ingress-nginx-admission-patch-c5qdg
	f04d8049cfb9e       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   42c990cee9cf9       ingress-nginx-admission-create-28f56
	82eab0a0ee3d1       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   cf033690a3c70       coredns-66bff467f8-f8dgp
	ead6b1b91e365       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   631915ef3806a       storage-provisioner
	067ab99fbbae8       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   c2d27c2bccbfe       kindnet-4fklw
	40c3bac44abdb       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   f39e7f6914cab       kube-proxy-mxbgl
	1c23921ad6720       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   f666e30b8167e       etcd-ingress-addon-legacy-865640
	cb71f3718b0e5       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   2b6b07c8ba6da       kube-controller-manager-ingress-addon-legacy-865640
	0115ed241411b       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   ee7f26cbc5945       kube-scheduler-ingress-addon-legacy-865640
	4f6e0c6d2cdea       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   e0775b17b7ec1       kube-apiserver-ingress-addon-legacy-865640
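	
	For reference, a container listing like the one above can normally be reproduced directly on the node through the CRI client; a hedged sketch, assuming crictl is available in the minikube node image used for this profile:
	
	  # open a shell on the profile's node and list all containers known to CRI-O
	  minikube ssh -p ingress-addon-legacy-865640 "sudo crictl ps -a"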
	
	
	==> coredns [82eab0a0ee3d1ea1c618427793bd6b6b866a3e8c0ea98c72f1f9c72acbcd5731] <==
	[INFO] 10.244.0.5:39366 - 50375 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004986234s
	[INFO] 10.244.0.5:39366 - 32194 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004359469s
	[INFO] 10.244.0.5:47943 - 34634 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00427867s
	[INFO] 10.244.0.5:42823 - 16529 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004583354s
	[INFO] 10.244.0.5:42202 - 29053 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004511794s
	[INFO] 10.244.0.5:41728 - 7327 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00465102s
	[INFO] 10.244.0.5:51144 - 44029 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004621313s
	[INFO] 10.244.0.5:55567 - 53859 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004510194s
	[INFO] 10.244.0.5:48896 - 39340 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004752751s
	[INFO] 10.244.0.5:41728 - 1262 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004308718s
	[INFO] 10.244.0.5:42823 - 44190 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004384611s
	[INFO] 10.244.0.5:55567 - 61011 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004335153s
	[INFO] 10.244.0.5:39366 - 65354 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004671554s
	[INFO] 10.244.0.5:48896 - 9261 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004157809s
	[INFO] 10.244.0.5:47943 - 12068 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004656791s
	[INFO] 10.244.0.5:41728 - 40384 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058205s
	[INFO] 10.244.0.5:42823 - 37607 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005573s
	[INFO] 10.244.0.5:51144 - 46172 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004717641s
	[INFO] 10.244.0.5:48896 - 38452 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000185674s
	[INFO] 10.244.0.5:42202 - 15836 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00470241s
	[INFO] 10.244.0.5:55567 - 32592 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000186327s
	[INFO] 10.244.0.5:51144 - 29912 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000061552s
	[INFO] 10.244.0.5:39366 - 65321 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000266261s
	[INFO] 10.244.0.5:47943 - 42216 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000242185s
	[INFO] 10.244.0.5:42202 - 8574 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006011s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-865640
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-865640
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=ingress-addon-legacy-865640
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T09_36_46_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 09:36:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-865640
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 09:40:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 09:40:16 +0000   Mon, 15 Jan 2024 09:36:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 09:40:16 +0000   Mon, 15 Jan 2024 09:36:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 09:40:16 +0000   Mon, 15 Jan 2024 09:36:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 09:40:16 +0000   Mon, 15 Jan 2024 09:37:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-865640
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 533584600bbf45a9916df8aec267cc09
	  System UUID:                c229c006-0e6f-4ecc-9545-50a7a977a61e
	  Boot ID:                    cfbd0cf6-9096-4b85-b302-a1df984ff6e8
	  Kernel Version:             5.15.0-1048-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-h22rg                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-f8dgp                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m23s
	  kube-system                 etcd-ingress-addon-legacy-865640                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kindnet-4fklw                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m24s
	  kube-system                 kube-apiserver-ingress-addon-legacy-865640             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-865640    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-proxy-mxbgl                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 kube-scheduler-ingress-addon-legacy-865640             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m38s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m38s  kubelet     Node ingress-addon-legacy-865640 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m38s  kubelet     Node ingress-addon-legacy-865640 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m38s  kubelet     Node ingress-addon-legacy-865640 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m22s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m18s  kubelet     Node ingress-addon-legacy-865640 status is now: NodeReady
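	
	The node description above, including the resource-request table, corresponds to the output of kubectl describe node; a hedged sketch for re-checking just the capacity and allocatable figures (context name assumed to match the profile, as elsewhere in this report):
	
	  kubectl --context ingress-addon-legacy-865640 describe node ingress-addon-legacy-865640
	  # or only the capacity/allocatable fields
	  kubectl --context ingress-addon-legacy-865640 get node ingress-addon-legacy-865640 -o jsonpath='{.status.capacity}{"\n"}{.status.allocatable}{"\n"}'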
	
	
	==> dmesg <==
	[  +0.004923] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006597] FS-Cache: N-cookie d=000000004a606ad2{9p.inode} n=000000008a9152b2
	[  +0.008754] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.308298] FS-Cache: Duplicate cookie detected
	[  +0.004676] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000004a606ad2{9p.inode} n=000000009b2895ef
	[  +0.007370] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.008060] FS-Cache: N-cookie d=000000004a606ad2{9p.inode} n=000000007fcb8ee9
	[  +0.008754] FS-Cache: N-key=[8] '0690130200000000'
	[ +24.537057] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan15 09:37] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ba a2 32 d8 98 aa be fb 7b 0f 17 d3 08 00
	[  +1.024160] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: ba a2 32 d8 98 aa be fb 7b 0f 17 d3 08 00
	[  +2.015838] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ba a2 32 d8 98 aa be fb 7b 0f 17 d3 08 00
	[  +4.255683] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ba a2 32 d8 98 aa be fb 7b 0f 17 d3 08 00
	[Jan15 09:38] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ba a2 32 d8 98 aa be fb 7b 0f 17 d3 08 00
	[ +16.122906] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: ba a2 32 d8 98 aa be fb 7b 0f 17 d3 08 00
	[ +33.277743] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ba a2 32 d8 98 aa be fb 7b 0f 17 d3 08 00
	
	
	==> etcd [1c23921ad672069bb17196f1a2863c34f785b3fb2e28e235d2760efd761a69a0] <==
	raft2024/01/15 09:36:39 INFO: aec36adc501070cc became follower at term 0
	raft2024/01/15 09:36:39 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/01/15 09:36:39 INFO: aec36adc501070cc became follower at term 1
	raft2024/01/15 09:36:39 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-15 09:36:39.829385 W | auth: simple token is not cryptographically signed
	2024-01-15 09:36:39.832825 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-15 09:36:39.833636 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/15 09:36:39 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-15 09:36:39.834112 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-01-15 09:36:39.838105 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-15 09:36:39.838203 I | embed: listening for peers on 192.168.49.2:2380
	2024-01-15 09:36:39.838299 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/01/15 09:36:40 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/01/15 09:36:40 INFO: aec36adc501070cc became candidate at term 2
	raft2024/01/15 09:36:40 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/01/15 09:36:40 INFO: aec36adc501070cc became leader at term 2
	raft2024/01/15 09:36:40 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-01-15 09:36:40.566325 I | etcdserver: published {Name:ingress-addon-legacy-865640 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-01-15 09:36:40.566351 I | embed: ready to serve client requests
	2024-01-15 09:36:40.566364 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-15 09:36:40.566512 I | embed: ready to serve client requests
	2024-01-15 09:36:40.567321 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-15 09:36:40.567384 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-15 09:36:40.567703 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-15 09:36:40.567761 I | embed: serving client requests on 192.168.49.2:2379
	
	
	==> kernel <==
	 09:40:24 up 22 min,  0 users,  load average: 0.23, 0.66, 0.49
	Linux ingress-addon-legacy-865640 5.15.0-1048-gcp #56~20.04.1-Ubuntu SMP Fri Nov 24 16:52:37 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [067ab99fbbae8e726838a691d8a6413848c86e3d879cbd22855cc8053ba45610] <==
	I0115 09:38:24.273584       1 main.go:227] handling current node
	I0115 09:38:34.277262       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:38:34.277287       1 main.go:227] handling current node
	I0115 09:38:44.288793       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:38:44.288819       1 main.go:227] handling current node
	I0115 09:38:54.292148       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:38:54.292172       1 main.go:227] handling current node
	I0115 09:39:04.295551       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:39:04.295576       1 main.go:227] handling current node
	I0115 09:39:14.298647       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:39:14.298727       1 main.go:227] handling current node
	I0115 09:39:24.302342       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:39:24.302369       1 main.go:227] handling current node
	I0115 09:39:34.312179       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:39:34.312204       1 main.go:227] handling current node
	I0115 09:39:44.324347       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:39:44.324375       1 main.go:227] handling current node
	I0115 09:39:54.328889       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:39:54.328913       1 main.go:227] handling current node
	I0115 09:40:04.332302       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:40:04.332329       1 main.go:227] handling current node
	I0115 09:40:14.340272       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:40:14.340297       1 main.go:227] handling current node
	I0115 09:40:24.352385       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 09:40:24.352421       1 main.go:227] handling current node
	
	
	==> kube-apiserver [4f6e0c6d2cdeaf587ce2ba31ec1f58cc5814cd1136a756d60d68cafb045b7442] <==
	I0115 09:36:43.275714       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	E0115 09:36:43.275767       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0115 09:36:43.374481       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0115 09:36:43.374490       1 cache.go:39] Caches are synced for autoregister controller
	I0115 09:36:43.374597       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0115 09:36:43.376059       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0115 09:36:43.377487       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0115 09:36:44.273495       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0115 09:36:44.273521       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0115 09:36:44.278774       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0115 09:36:44.281480       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0115 09:36:44.281503       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0115 09:36:44.570138       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0115 09:36:44.599292       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0115 09:36:44.660062       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0115 09:36:44.661171       1 controller.go:609] quota admission added evaluator for: endpoints
	I0115 09:36:44.664285       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0115 09:36:45.551082       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0115 09:36:46.022186       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0115 09:36:46.190040       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0115 09:36:46.370097       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0115 09:37:00.971686       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0115 09:37:01.528211       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0115 09:37:21.198695       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0115 09:37:39.329776       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [cb71f3718b0e55cc86303e8cdc03d555160e8055cb05d747b6615e115ce3153e] <==
	I0115 09:37:01.299710       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0115 09:37:01.399538       1 shared_informer.go:230] Caches are synced for attach detach 
	I0115 09:37:01.428831       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0115 09:37:01.499126       1 shared_informer.go:230] Caches are synced for deployment 
	I0115 09:37:01.525256       1 shared_informer.go:230] Caches are synced for HPA 
	I0115 09:37:01.531523       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"7f8d5f50-6c74-484a-8a00-c80aaece8410", APIVersion:"apps/v1", ResourceVersion:"346", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0115 09:37:01.538419       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"9b6006b8-8115-44a3-b150-f0a8def970ce", APIVersion:"apps/v1", ResourceVersion:"347", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-f8dgp
	I0115 09:37:01.551534       1 shared_informer.go:230] Caches are synced for resource quota 
	I0115 09:37:01.604226       1 shared_informer.go:230] Caches are synced for resource quota 
	I0115 09:37:01.625240       1 shared_informer.go:230] Caches are synced for disruption 
	I0115 09:37:01.625264       1 disruption.go:339] Sending events to api server.
	I0115 09:37:01.625307       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0115 09:37:01.625329       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0115 09:37:01.725505       1 request.go:621] Throttling request took 1.008278871s, request: GET:https://control-plane.minikube.internal:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	I0115 09:37:02.234756       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0115 09:37:02.234809       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0115 09:37:10.983724       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0115 09:37:21.193396       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"9f4f3bc0-14ba-4eb8-b988-cc735586c772", APIVersion:"apps/v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0115 09:37:21.230141       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"649be658-9b5b-411e-b0ab-26afb98d89e5", APIVersion:"apps/v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-2skkf
	I0115 09:37:21.230975       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"d112474b-6c9d-4479-ae7c-ef99b836a2ed", APIVersion:"batch/v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-28f56
	I0115 09:37:21.247061       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"5befdd73-4880-48a4-8a3d-c65cb487f6fc", APIVersion:"batch/v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-c5qdg
	I0115 09:37:23.541528       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"5befdd73-4880-48a4-8a3d-c65cb487f6fc", APIVersion:"batch/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0115 09:37:23.549513       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"d112474b-6c9d-4479-ae7c-ef99b836a2ed", APIVersion:"batch/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0115 09:39:59.383418       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"35a6ff72-46aa-4a35-8a26-77d6eb4e1555", APIVersion:"apps/v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0115 09:39:59.388544       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"0a715531-7e79-48d2-bc2b-86df70e34d69", APIVersion:"apps/v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-h22rg
	
	
	==> kube-proxy [40c3bac44abdbcc618dbc2f6098954b04c872b3c80b207ef2d085f82739c4737] <==
	W0115 09:37:02.132078       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0115 09:37:02.139017       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0115 09:37:02.139053       1 server_others.go:186] Using iptables Proxier.
	I0115 09:37:02.139353       1 server.go:583] Version: v1.18.20
	I0115 09:37:02.139903       1 config.go:315] Starting service config controller
	I0115 09:37:02.139923       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0115 09:37:02.140158       1 config.go:133] Starting endpoints config controller
	I0115 09:37:02.140230       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0115 09:37:02.325458       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0115 09:37:02.326700       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [0115ed241411ba2397fb194d2dfde99b7a6b4b45542a35d539807a14977f7ca5] <==
	I0115 09:36:43.341538       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0115 09:36:43.341903       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0115 09:36:43.341977       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0115 09:36:43.342919       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0115 09:36:43.343101       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 09:36:43.343697       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0115 09:36:43.344169       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 09:36:43.344337       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 09:36:43.344472       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 09:36:43.344819       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0115 09:36:43.344843       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 09:36:43.344906       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0115 09:36:43.344849       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 09:36:43.344916       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 09:36:43.344963       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 09:36:44.170900       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 09:36:44.223050       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0115 09:36:44.224270       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0115 09:36:44.288845       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0115 09:36:44.308830       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 09:36:44.331958       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 09:36:44.359170       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 09:36:44.381378       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0115 09:36:46.842229       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0115 09:37:02.333880       1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
	
	
	==> kubelet <==
	Jan 15 09:39:41 ingress-addon-legacy-865640 kubelet[1878]: E0115 09:39:41.382561    1878 pod_workers.go:191] Error syncing pod aaddf675-99bd-49dc-b834-3699da56b6a4 ("kube-ingress-dns-minikube_kube-system(aaddf675-99bd-49dc-b834-3699da56b6a4)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 15 09:39:53 ingress-addon-legacy-865640 kubelet[1878]: E0115 09:39:53.382544    1878 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 15 09:39:53 ingress-addon-legacy-865640 kubelet[1878]: E0115 09:39:53.382589    1878 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 15 09:39:53 ingress-addon-legacy-865640 kubelet[1878]: E0115 09:39:53.382632    1878 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 15 09:39:53 ingress-addon-legacy-865640 kubelet[1878]: E0115 09:39:53.382661    1878 pod_workers.go:191] Error syncing pod aaddf675-99bd-49dc-b834-3699da56b6a4 ("kube-ingress-dns-minikube_kube-system(aaddf675-99bd-49dc-b834-3699da56b6a4)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 15 09:39:59 ingress-addon-legacy-865640 kubelet[1878]: I0115 09:39:59.394522    1878 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 15 09:39:59 ingress-addon-legacy-865640 kubelet[1878]: I0115 09:39:59.573414    1878 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-plqcn" (UniqueName: "kubernetes.io/secret/45f875c3-562d-4fcc-a978-d22b7ac0ed04-default-token-plqcn") pod "hello-world-app-5f5d8b66bb-h22rg" (UID: "45f875c3-562d-4fcc-a978-d22b7ac0ed04")
	Jan 15 09:39:59 ingress-addon-legacy-865640 kubelet[1878]: W0115 09:39:59.741691    1878 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/43c5ddc1c8d004d1e3ba6541c7c47c19ea2e6e129ea9543f39038e35887d5dcc/crio-2ac3f86cd88be1f74962787ef59cf124bdaa689c7fc6cb5424fd33879bb5e4df WatchSource:0}: Error finding container 2ac3f86cd88be1f74962787ef59cf124bdaa689c7fc6cb5424fd33879bb5e4df: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc0009fe0a0 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x750800) %!!(MISSING)s(func() error=0x750790)}
	Jan 15 09:40:07 ingress-addon-legacy-865640 kubelet[1878]: E0115 09:40:07.382478    1878 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 15 09:40:07 ingress-addon-legacy-865640 kubelet[1878]: E0115 09:40:07.382519    1878 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 15 09:40:07 ingress-addon-legacy-865640 kubelet[1878]: E0115 09:40:07.382566    1878 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 15 09:40:07 ingress-addon-legacy-865640 kubelet[1878]: E0115 09:40:07.382596    1878 pod_workers.go:191] Error syncing pod aaddf675-99bd-49dc-b834-3699da56b6a4 ("kube-ingress-dns-minikube_kube-system(aaddf675-99bd-49dc-b834-3699da56b6a4)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 15 09:40:15 ingress-addon-legacy-865640 kubelet[1878]: I0115 09:40:15.262375    1878 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-h4kxg" (UniqueName: "kubernetes.io/secret/aaddf675-99bd-49dc-b834-3699da56b6a4-minikube-ingress-dns-token-h4kxg") pod "aaddf675-99bd-49dc-b834-3699da56b6a4" (UID: "aaddf675-99bd-49dc-b834-3699da56b6a4")
	Jan 15 09:40:15 ingress-addon-legacy-865640 kubelet[1878]: I0115 09:40:15.264447    1878 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaddf675-99bd-49dc-b834-3699da56b6a4-minikube-ingress-dns-token-h4kxg" (OuterVolumeSpecName: "minikube-ingress-dns-token-h4kxg") pod "aaddf675-99bd-49dc-b834-3699da56b6a4" (UID: "aaddf675-99bd-49dc-b834-3699da56b6a4"). InnerVolumeSpecName "minikube-ingress-dns-token-h4kxg". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 09:40:15 ingress-addon-legacy-865640 kubelet[1878]: I0115 09:40:15.362643    1878 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-h4kxg" (UniqueName: "kubernetes.io/secret/aaddf675-99bd-49dc-b834-3699da56b6a4-minikube-ingress-dns-token-h4kxg") on node "ingress-addon-legacy-865640" DevicePath ""
	Jan 15 09:40:16 ingress-addon-legacy-865640 kubelet[1878]: W0115 09:40:16.793004    1878 pod_container_deletor.go:77] Container "6997c7e9abfa6e584a85aa2d9ed7e55b223567c3c76b2e523171a3c958ca69d8" not found in pod's containers
	Jan 15 09:40:17 ingress-addon-legacy-865640 kubelet[1878]: E0115 09:40:17.170096    1878 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-2skkf.17aa7be174abb5a3", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-2skkf", UID:"65bb0b4e-19ee-4591-94d0-3703921f794f", APIVersion:"v1", ResourceVersion:"468", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-865640"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1615d404a09eba3, ext:211185778085, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1615d404a09eba3, ext:211185778085, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-2skkf.17aa7be174abb5a3" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 15 09:40:17 ingress-addon-legacy-865640 kubelet[1878]: E0115 09:40:17.172857    1878 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-2skkf.17aa7be174abb5a3", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-2skkf", UID:"65bb0b4e-19ee-4591-94d0-3703921f794f", APIVersion:"v1", ResourceVersion:"468", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-865640"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1615d404a09eba3, ext:211185778085, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1615d404a1f91d1, ext:211187196872, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-2skkf.17aa7be174abb5a3" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 15 09:40:19 ingress-addon-legacy-865640 kubelet[1878]: W0115 09:40:19.798708    1878 pod_container_deletor.go:77] Container "3e3d3348301f06cbef5b76b491640200c2e4fd398c8d3d380888dd033183efeb" not found in pod's containers
	Jan 15 09:40:21 ingress-addon-legacy-865640 kubelet[1878]: I0115 09:40:21.336504    1878 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/65bb0b4e-19ee-4591-94d0-3703921f794f-webhook-cert") pod "65bb0b4e-19ee-4591-94d0-3703921f794f" (UID: "65bb0b4e-19ee-4591-94d0-3703921f794f")
	Jan 15 09:40:21 ingress-addon-legacy-865640 kubelet[1878]: I0115 09:40:21.336582    1878 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-6dbtm" (UniqueName: "kubernetes.io/secret/65bb0b4e-19ee-4591-94d0-3703921f794f-ingress-nginx-token-6dbtm") pod "65bb0b4e-19ee-4591-94d0-3703921f794f" (UID: "65bb0b4e-19ee-4591-94d0-3703921f794f")
	Jan 15 09:40:21 ingress-addon-legacy-865640 kubelet[1878]: I0115 09:40:21.338604    1878 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65bb0b4e-19ee-4591-94d0-3703921f794f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "65bb0b4e-19ee-4591-94d0-3703921f794f" (UID: "65bb0b4e-19ee-4591-94d0-3703921f794f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 09:40:21 ingress-addon-legacy-865640 kubelet[1878]: I0115 09:40:21.338672    1878 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65bb0b4e-19ee-4591-94d0-3703921f794f-ingress-nginx-token-6dbtm" (OuterVolumeSpecName: "ingress-nginx-token-6dbtm") pod "65bb0b4e-19ee-4591-94d0-3703921f794f" (UID: "65bb0b4e-19ee-4591-94d0-3703921f794f"). InnerVolumeSpecName "ingress-nginx-token-6dbtm". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 09:40:21 ingress-addon-legacy-865640 kubelet[1878]: I0115 09:40:21.436898    1878 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/65bb0b4e-19ee-4591-94d0-3703921f794f-webhook-cert") on node "ingress-addon-legacy-865640" DevicePath ""
	Jan 15 09:40:21 ingress-addon-legacy-865640 kubelet[1878]: I0115 09:40:21.436941    1878 reconciler.go:319] Volume detached for volume "ingress-nginx-token-6dbtm" (UniqueName: "kubernetes.io/secret/65bb0b4e-19ee-4591-94d0-3703921f794f-ingress-nginx-token-6dbtm") on node "ingress-addon-legacy-865640" DevicePath ""
	
	
	==> storage-provisioner [ead6b1b91e3650b7f29636b781b6b2ce2f5429234540698ac12961a42caca377] <==
	I0115 09:37:09.761160       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 09:37:09.769241       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 09:37:09.769295       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 09:37:09.775237       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 09:37:09.775339       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"63b1bbc9-819a-4b3d-8b90-a3750e33850c", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-865640_a50039b5-ec2a-43ae-b6a2-78f0eb7f1bea became leader
	I0115 09:37:09.775410       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-865640_a50039b5-ec2a-43ae-b6a2-78f0eb7f1bea!
	I0115 09:37:09.876395       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-865640_a50039b5-ec2a-43ae-b6a2-78f0eb7f1bea!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-865640 -n ingress-addon-legacy-865640
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-865640 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (176.37s)
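Note that the kubelet log above also records repeated ImageInspectError failures for the ingress-dns addon: CRI-O rejects the short cryptexlabs/minikube-ingress-dns image reference because no unqualified-search registries are defined in /etc/containers/registries.conf. A minimal way to check that setting on the node, assuming the registries.conf path reported by the kubelet and the standard unqualified-search-registries key, is:

	out/minikube-linux-amd64 -p ingress-addon-legacy-865640 ssh "grep -n unqualified-search /etc/containers/registries.conf"

Referencing the image with a fully qualified name (for example a docker.io/ prefix) avoids short-name resolution altogether.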

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (2.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-218062 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-218062 -- exec busybox-5bc68d56bd-cplh9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-218062 -- exec busybox-5bc68d56bd-cplh9 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-218062 -- exec busybox-5bc68d56bd-cplh9 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (156.703245ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-cplh9): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-218062 -- exec busybox-5bc68d56bd-djgvv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-218062 -- exec busybox-5bc68d56bd-djgvv -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-218062 -- exec busybox-5bc68d56bd-djgvv -- sh -c "ping -c 1 192.168.58.1": exit status 1 (161.168211ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-djgvv): exit status 1
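Both pods fail with "ping: permission denied (are you root?)", which busybox prints when it cannot open an ICMP socket. Under the crio runtime the container's default capability set commonly omits NET_RAW, so raw-socket ping fails even for root inside the container. As a hedged sketch, assuming the standard CRI-O configuration lives under /etc/crio/ and uses the default_capabilities key, the runtime's capability list could be inspected on the node with:

	out/minikube-linux-amd64 -p multinode-218062 ssh "grep -rn default_capabilities /etc/crio/"

Permitting unprivileged datagram ICMP via the kernel (the net.ipv4.ping_group_range sysctl) is a sometimes-suggested alternative, though whether busybox ping falls back to datagram sockets depends on how it was built.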
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-218062
helpers_test.go:235: (dbg) docker inspect multinode-218062:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "895276697ddf292070b37b36ad96b7f2291cd57ef760db46eff306facb766d84",
	        "Created": "2024-01-15T09:45:21.851552164Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 95524,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-15T09:45:22.136495954Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/895276697ddf292070b37b36ad96b7f2291cd57ef760db46eff306facb766d84/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/895276697ddf292070b37b36ad96b7f2291cd57ef760db46eff306facb766d84/hostname",
	        "HostsPath": "/var/lib/docker/containers/895276697ddf292070b37b36ad96b7f2291cd57ef760db46eff306facb766d84/hosts",
	        "LogPath": "/var/lib/docker/containers/895276697ddf292070b37b36ad96b7f2291cd57ef760db46eff306facb766d84/895276697ddf292070b37b36ad96b7f2291cd57ef760db46eff306facb766d84-json.log",
	        "Name": "/multinode-218062",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-218062:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-218062",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c6851f8235241387268d5695685612349ba037e5312e582250e230edbf3dcf1a-init/diff:/var/lib/docker/overlay2/d9ef098e29db67903afbff93fb25a8f837156cdbfdd0e74ced52d24f8de7a26c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c6851f8235241387268d5695685612349ba037e5312e582250e230edbf3dcf1a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c6851f8235241387268d5695685612349ba037e5312e582250e230edbf3dcf1a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c6851f8235241387268d5695685612349ba037e5312e582250e230edbf3dcf1a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-218062",
	                "Source": "/var/lib/docker/volumes/multinode-218062/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-218062",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-218062",
	                "name.minikube.sigs.k8s.io": "multinode-218062",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d510cc5292ceafe0fe60b7afe989d32d1d737642c22a4140fe5f2bde9eb56a6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1d510cc5292c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-218062": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "895276697ddf",
	                        "multinode-218062"
	                    ],
	                    "NetworkID": "dd324d5abab793ee5aaa12747697e7038ad320903cc1798e58c7a03ade176357",
	                    "EndpointID": "4c4bbd6de457b249ba9c5435c6744dbbfdbee1fe32a1357a1e88443028672563",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-218062 -n multinode-218062
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-218062 logs -n 25: (1.158595127s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-483882                           | mount-start-2-483882 | jenkins | v1.32.0 | 15 Jan 24 09:44 UTC | 15 Jan 24 09:45 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-483882 ssh -- ls                    | mount-start-2-483882 | jenkins | v1.32.0 | 15 Jan 24 09:45 UTC | 15 Jan 24 09:45 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-470166                           | mount-start-1-470166 | jenkins | v1.32.0 | 15 Jan 24 09:45 UTC | 15 Jan 24 09:45 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-483882 ssh -- ls                    | mount-start-2-483882 | jenkins | v1.32.0 | 15 Jan 24 09:45 UTC | 15 Jan 24 09:45 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-483882                           | mount-start-2-483882 | jenkins | v1.32.0 | 15 Jan 24 09:45 UTC | 15 Jan 24 09:45 UTC |
	| start   | -p mount-start-2-483882                           | mount-start-2-483882 | jenkins | v1.32.0 | 15 Jan 24 09:45 UTC | 15 Jan 24 09:45 UTC |
	| ssh     | mount-start-2-483882 ssh -- ls                    | mount-start-2-483882 | jenkins | v1.32.0 | 15 Jan 24 09:45 UTC | 15 Jan 24 09:45 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-483882                           | mount-start-2-483882 | jenkins | v1.32.0 | 15 Jan 24 09:45 UTC | 15 Jan 24 09:45 UTC |
	| delete  | -p mount-start-1-470166                           | mount-start-1-470166 | jenkins | v1.32.0 | 15 Jan 24 09:45 UTC | 15 Jan 24 09:45 UTC |
	| start   | -p multinode-218062                               | multinode-218062     | jenkins | v1.32.0 | 15 Jan 24 09:45 UTC | 15 Jan 24 09:46 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-218062 -- apply -f                   | multinode-218062     | jenkins | v1.32.0 | 15 Jan 24 09:46 UTC | 15 Jan 24 09:46 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-218062 -- rollout                    | multinode-218062     | jenkins | v1.32.0 | 15 Jan 24 09:46 UTC | 15 Jan 24 09:46 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-218062 -- get pods -o                | multinode-218062     | jenkins | v1.32.0 | 15 Jan 24 09:46 UTC | 15 Jan 24 09:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-218062 -- get pods -o                | multinode-218062     | jenkins | v1.32.0 | 15 Jan 24 09:46 UTC | 15 Jan 24 09:46 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-218062 -- exec                       | multinode-218062     | jenkins | v1.32.0 | 15 Jan 24 09:46 UTC | 15 Jan 24 09:46 UTC |
	|         | busybox-5bc68d56bd-cplh9 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-218062 -- exec                       | multinode-218062     | jenkins | v1.32.0 | 15 Jan 24 09:46 UTC | 15 Jan 24 09:46 UTC |
	|         | busybox-5bc68d56bd-djgvv --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-218062 -- exec                       | multinode-218062     | jenkins | v1.32.0 | 15 Jan 24 09:46 UTC | 15 Jan 24 09:46 UTC |
	|         | busybox-5bc68d56bd-cplh9 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-218062 -- exec                       | multinode-218062     | jenkins | v1.32.0 | 15 Jan 24 09:46 UTC | 15 Jan 24 09:46 UTC |
	|         | busybox-5bc68d56bd-djgvv --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-218062 -- exec                       | multinode-218062     | jenkins | v1.32.0 | 15 Jan 24 09:46 UTC | 15 Jan 24 09:46 UTC |
	|         | busybox-5bc68d56bd-cplh9 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-218062 -- exec                       | multinode-218062     | jenkins | v1.32.0 | 15 Jan 24 09:46 UTC | 15 Jan 24 09:46 UTC |
	|         | busybox-5bc68d56bd-djgvv -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-218062 -- get pods -o                | multinode-218062     | jenkins | v1.32.0 | 15 Jan 24 09:46 UTC | 15 Jan 24 09:46 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-218062 -- exec                       | multinode-218062     | jenkins | v1.32.0 | 15 Jan 24 09:46 UTC | 15 Jan 24 09:46 UTC |
	|         | busybox-5bc68d56bd-cplh9                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-218062 -- exec                       | multinode-218062     | jenkins | v1.32.0 | 15 Jan 24 09:46 UTC |                     |
	|         | busybox-5bc68d56bd-cplh9 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-218062 -- exec                       | multinode-218062     | jenkins | v1.32.0 | 15 Jan 24 09:46 UTC | 15 Jan 24 09:46 UTC |
	|         | busybox-5bc68d56bd-djgvv                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-218062 -- exec                       | multinode-218062     | jenkins | v1.32.0 | 15 Jan 24 09:46 UTC |                     |
	|         | busybox-5bc68d56bd-djgvv -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 09:45:15
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 09:45:15.934163   94931 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:45:15.934306   94931 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:45:15.934318   94931 out.go:309] Setting ErrFile to fd 2...
	I0115 09:45:15.934323   94931 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:45:15.934521   94931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-3696/.minikube/bin
	I0115 09:45:15.935141   94931 out.go:303] Setting JSON to false
	I0115 09:45:15.936011   94931 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1666,"bootTime":1705310250,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:45:15.936080   94931 start.go:138] virtualization: kvm guest
	I0115 09:45:15.938532   94931 out.go:177] * [multinode-218062] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 09:45:15.939916   94931 notify.go:220] Checking for updates...
	I0115 09:45:15.941309   94931 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 09:45:15.942742   94931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:45:15.944138   94931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:45:15.945448   94931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube
	I0115 09:45:15.946749   94931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 09:45:15.948041   94931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 09:45:15.951693   94931 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:45:15.972679   94931 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 09:45:15.972792   94931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 09:45:16.029214   94931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-15 09:45:16.020936629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 09:45:16.029315   94931 docker.go:295] overlay module found
	I0115 09:45:16.031350   94931 out.go:177] * Using the docker driver based on user configuration
	I0115 09:45:16.033928   94931 start.go:298] selected driver: docker
	I0115 09:45:16.033941   94931 start.go:902] validating driver "docker" against <nil>
	I0115 09:45:16.033952   94931 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 09:45:16.034757   94931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 09:45:16.086261   94931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-15 09:45:16.077779141 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 09:45:16.086408   94931 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 09:45:16.086619   94931 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 09:45:16.088687   94931 out.go:177] * Using Docker driver with root privileges
	I0115 09:45:16.090218   94931 cni.go:84] Creating CNI manager for ""
	I0115 09:45:16.090239   94931 cni.go:136] 0 nodes found, recommending kindnet
	I0115 09:45:16.090248   94931 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 09:45:16.090263   94931 start_flags.go:321] config:
	{Name:multinode-218062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-218062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:45:16.091915   94931 out.go:177] * Starting control plane node multinode-218062 in cluster multinode-218062
	I0115 09:45:16.093361   94931 cache.go:121] Beginning downloading kic base image for docker with crio
	I0115 09:45:16.095084   94931 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 09:45:16.096505   94931 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:45:16.096542   94931 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0115 09:45:16.096557   94931 cache.go:56] Caching tarball of preloaded images
	I0115 09:45:16.096559   94931 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 09:45:16.096658   94931 preload.go:174] Found /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 09:45:16.096671   94931 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 09:45:16.097059   94931 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/config.json ...
	I0115 09:45:16.097083   94931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/config.json: {Name:mkfcada63cd30e39cdcbdb2501061c1ce339e8aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:45:16.112353   94931 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0115 09:45:16.112377   94931 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0115 09:45:16.112395   94931 cache.go:194] Successfully downloaded all kic artifacts
	I0115 09:45:16.112430   94931 start.go:365] acquiring machines lock for multinode-218062: {Name:mka72df0692cca76d291b6c4ef8db004e2feb4f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:45:16.112543   94931 start.go:369] acquired machines lock for "multinode-218062" in 89.532µs
	I0115 09:45:16.112652   94931 start.go:93] Provisioning new machine with config: &{Name:multinode-218062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-218062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 09:45:16.112776   94931 start.go:125] createHost starting for "" (driver="docker")
	I0115 09:45:16.115255   94931 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0115 09:45:16.115790   94931 start.go:159] libmachine.API.Create for "multinode-218062" (driver="docker")
	I0115 09:45:16.115879   94931 client.go:168] LocalClient.Create starting
	I0115 09:45:16.115955   94931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem
	I0115 09:45:16.115996   94931 main.go:141] libmachine: Decoding PEM data...
	I0115 09:45:16.116016   94931 main.go:141] libmachine: Parsing certificate...
	I0115 09:45:16.116084   94931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem
	I0115 09:45:16.116105   94931 main.go:141] libmachine: Decoding PEM data...
	I0115 09:45:16.116125   94931 main.go:141] libmachine: Parsing certificate...
	I0115 09:45:16.116797   94931 cli_runner.go:164] Run: docker network inspect multinode-218062 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 09:45:16.132588   94931 cli_runner.go:211] docker network inspect multinode-218062 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 09:45:16.132666   94931 network_create.go:281] running [docker network inspect multinode-218062] to gather additional debugging logs...
	I0115 09:45:16.132687   94931 cli_runner.go:164] Run: docker network inspect multinode-218062
	W0115 09:45:16.148234   94931 cli_runner.go:211] docker network inspect multinode-218062 returned with exit code 1
	I0115 09:45:16.148275   94931 network_create.go:284] error running [docker network inspect multinode-218062]: docker network inspect multinode-218062: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-218062 not found
	I0115 09:45:16.148291   94931 network_create.go:286] output of [docker network inspect multinode-218062]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-218062 not found
	
	** /stderr **
	I0115 09:45:16.148403   94931 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 09:45:16.164805   94931 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f8f6ef0a0f1f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:84:0b:cc:c6} reservation:<nil>}
	I0115 09:45:16.165328   94931 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00281d3c0}
	I0115 09:45:16.165366   94931 network_create.go:124] attempt to create docker network multinode-218062 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0115 09:45:16.165429   94931 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-218062 multinode-218062
	I0115 09:45:16.219300   94931 network_create.go:108] docker network multinode-218062 192.168.58.0/24 created
	I0115 09:45:16.219336   94931 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-218062" container
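	The subnet picked above determines both the node IP (192.168.58.2) and the gateway address (192.168.58.1) that the failing ping check targets. The created bridge can be confirmed directly with the same docker CLI the log drives (a sketch):
	
	  docker network inspect multinode-218062 -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	  # expected for this run: 192.168.58.0/24 192.168.58.1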
	I0115 09:45:16.219407   94931 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 09:45:16.233935   94931 cli_runner.go:164] Run: docker volume create multinode-218062 --label name.minikube.sigs.k8s.io=multinode-218062 --label created_by.minikube.sigs.k8s.io=true
	I0115 09:45:16.250563   94931 oci.go:103] Successfully created a docker volume multinode-218062
	I0115 09:45:16.250640   94931 cli_runner.go:164] Run: docker run --rm --name multinode-218062-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-218062 --entrypoint /usr/bin/test -v multinode-218062:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 09:45:16.732420   94931 oci.go:107] Successfully prepared a docker volume multinode-218062
	I0115 09:45:16.732462   94931 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:45:16.732487   94931 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 09:45:16.732568   94931 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-218062:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 09:45:21.783356   94931 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-218062:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.050739923s)
	I0115 09:45:21.783388   94931 kic.go:203] duration metric: took 5.050901 seconds to extract preloaded images to volume
	W0115 09:45:21.783512   94931 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0115 09:45:21.783614   94931 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0115 09:45:21.835790   94931 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-218062 --name multinode-218062 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-218062 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-218062 --network multinode-218062 --ip 192.168.58.2 --volume multinode-218062:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
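	All service ports are published on ephemeral localhost ports (the --publish=127.0.0.1::22 form above); the SSH steps below talk to 127.0.0.1:32847, which is whatever host port docker bound for 22/tcp. That mapping can be re-derived by hand if needed (a sketch):
	
	  docker port multinode-218062 22/tcp
	  # prints the bound address, e.g. 127.0.0.1:32847 in this run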
	I0115 09:45:22.144844   94931 cli_runner.go:164] Run: docker container inspect multinode-218062 --format={{.State.Running}}
	I0115 09:45:22.163698   94931 cli_runner.go:164] Run: docker container inspect multinode-218062 --format={{.State.Status}}
	I0115 09:45:22.182143   94931 cli_runner.go:164] Run: docker exec multinode-218062 stat /var/lib/dpkg/alternatives/iptables
	I0115 09:45:22.225320   94931 oci.go:144] the created container "multinode-218062" has a running status.
	I0115 09:45:22.225360   94931 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062/id_rsa...
	I0115 09:45:22.276685   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0115 09:45:22.276731   94931 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0115 09:45:22.296971   94931 cli_runner.go:164] Run: docker container inspect multinode-218062 --format={{.State.Status}}
	I0115 09:45:22.315502   94931 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0115 09:45:22.315531   94931 kic_runner.go:114] Args: [docker exec --privileged multinode-218062 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0115 09:45:22.355299   94931 cli_runner.go:164] Run: docker container inspect multinode-218062 --format={{.State.Status}}
	I0115 09:45:22.374250   94931 machine.go:88] provisioning docker machine ...
	I0115 09:45:22.374290   94931 ubuntu.go:169] provisioning hostname "multinode-218062"
	I0115 09:45:22.374356   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062
	I0115 09:45:22.393493   94931 main.go:141] libmachine: Using SSH client type: native
	I0115 09:45:22.394012   94931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0115 09:45:22.394037   94931 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-218062 && echo "multinode-218062" | sudo tee /etc/hostname
	I0115 09:45:22.394702   94931 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50548->127.0.0.1:32847: read: connection reset by peer
	I0115 09:45:25.539066   94931 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-218062
	
	I0115 09:45:25.539163   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062
	I0115 09:45:25.555628   94931 main.go:141] libmachine: Using SSH client type: native
	I0115 09:45:25.555948   94931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0115 09:45:25.555965   94931 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-218062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-218062/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-218062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 09:45:25.689252   94931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 09:45:25.689286   94931 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17953-3696/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-3696/.minikube}
	I0115 09:45:25.689312   94931 ubuntu.go:177] setting up certificates
	I0115 09:45:25.689325   94931 provision.go:83] configureAuth start
	I0115 09:45:25.689384   94931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-218062
	I0115 09:45:25.705197   94931 provision.go:138] copyHostCerts
	I0115 09:45:25.705233   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17953-3696/.minikube/ca.pem
	I0115 09:45:25.705260   94931 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-3696/.minikube/ca.pem, removing ...
	I0115 09:45:25.705269   94931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.pem
	I0115 09:45:25.705336   94931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-3696/.minikube/ca.pem (1082 bytes)
	I0115 09:45:25.705409   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17953-3696/.minikube/cert.pem
	I0115 09:45:25.705425   94931 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-3696/.minikube/cert.pem, removing ...
	I0115 09:45:25.705432   94931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-3696/.minikube/cert.pem
	I0115 09:45:25.705454   94931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-3696/.minikube/cert.pem (1123 bytes)
	I0115 09:45:25.705494   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17953-3696/.minikube/key.pem
	I0115 09:45:25.705538   94931 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-3696/.minikube/key.pem, removing ...
	I0115 09:45:25.705548   94931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-3696/.minikube/key.pem
	I0115 09:45:25.705572   94931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-3696/.minikube/key.pem (1679 bytes)
	I0115 09:45:25.705622   94931 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca-key.pem org=jenkins.multinode-218062 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-218062]
	I0115 09:45:25.843601   94931 provision.go:172] copyRemoteCerts
	I0115 09:45:25.843667   94931 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 09:45:25.843701   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062
	I0115 09:45:25.860544   94931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062/id_rsa Username:docker}
	I0115 09:45:25.953366   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 09:45:25.953429   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0115 09:45:25.974391   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 09:45:25.974451   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0115 09:45:25.995352   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 09:45:25.995415   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0115 09:45:26.020344   94931 provision.go:86] duration metric: configureAuth took 331.003236ms
	I0115 09:45:26.020375   94931 ubuntu.go:193] setting minikube options for container-runtime
	I0115 09:45:26.020575   94931 config.go:182] Loaded profile config "multinode-218062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:45:26.020690   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062
	I0115 09:45:26.037427   94931 main.go:141] libmachine: Using SSH client type: native
	I0115 09:45:26.037891   94931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0115 09:45:26.037919   94931 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 09:45:26.254863   94931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 09:45:26.254891   94931 machine.go:91] provisioned docker machine in 3.880615607s
	I0115 09:45:26.254900   94931 client.go:171] LocalClient.Create took 10.139013722s
	I0115 09:45:26.254917   94931 start.go:167] duration metric: libmachine.API.Create for "multinode-218062" took 10.139131279s
	I0115 09:45:26.254924   94931 start.go:300] post-start starting for "multinode-218062" (driver="docker")
	I0115 09:45:26.254935   94931 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 09:45:26.254982   94931 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 09:45:26.255020   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062
	I0115 09:45:26.271813   94931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062/id_rsa Username:docker}
	I0115 09:45:26.365668   94931 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 09:45:26.368466   94931 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0115 09:45:26.368487   94931 command_runner.go:130] > NAME="Ubuntu"
	I0115 09:45:26.368496   94931 command_runner.go:130] > VERSION_ID="22.04"
	I0115 09:45:26.368504   94931 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0115 09:45:26.368520   94931 command_runner.go:130] > VERSION_CODENAME=jammy
	I0115 09:45:26.368527   94931 command_runner.go:130] > ID=ubuntu
	I0115 09:45:26.368535   94931 command_runner.go:130] > ID_LIKE=debian
	I0115 09:45:26.368543   94931 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0115 09:45:26.368551   94931 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0115 09:45:26.368576   94931 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0115 09:45:26.368585   94931 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0115 09:45:26.368589   94931 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0115 09:45:26.368634   94931 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0115 09:45:26.368655   94931 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0115 09:45:26.368663   94931 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0115 09:45:26.368669   94931 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0115 09:45:26.368677   94931 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-3696/.minikube/addons for local assets ...
	I0115 09:45:26.368722   94931 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-3696/.minikube/files for local assets ...
	I0115 09:45:26.368797   94931 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem -> 118252.pem in /etc/ssl/certs
	I0115 09:45:26.368808   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem -> /etc/ssl/certs/118252.pem
	I0115 09:45:26.368884   94931 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 09:45:26.376292   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem --> /etc/ssl/certs/118252.pem (1708 bytes)
	I0115 09:45:26.397626   94931 start.go:303] post-start completed in 142.687556ms
	I0115 09:45:26.397984   94931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-218062
	I0115 09:45:26.414078   94931 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/config.json ...
	I0115 09:45:26.414300   94931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 09:45:26.414338   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062
	I0115 09:45:26.429687   94931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062/id_rsa Username:docker}
	I0115 09:45:26.522000   94931 command_runner.go:130] > 24%!
	(MISSING)I0115 09:45:26.522094   94931 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 09:45:26.526315   94931 command_runner.go:130] > 223G
	I0115 09:45:26.526367   94931 start.go:128] duration metric: createHost completed in 10.413579182s
	I0115 09:45:26.526380   94931 start.go:83] releasing machines lock for "multinode-218062", held for 10.413757531s
	I0115 09:45:26.526448   94931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-218062
	I0115 09:45:26.543466   94931 ssh_runner.go:195] Run: cat /version.json
	I0115 09:45:26.543511   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062
	I0115 09:45:26.543583   94931 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 09:45:26.543645   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062
	I0115 09:45:26.560752   94931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062/id_rsa Username:docker}
	I0115 09:45:26.562102   94931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062/id_rsa Username:docker}
	I0115 09:45:26.747848   94931 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0115 09:45:26.747916   94931 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1704759386-17866", "minikube_version": "v1.32.0", "commit": "3c45a4d018cdc90b01d9bcb479fb293aad58ed8f"}
	I0115 09:45:26.748022   94931 ssh_runner.go:195] Run: systemctl --version
	I0115 09:45:26.752163   94931 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I0115 09:45:26.752217   94931 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0115 09:45:26.752295   94931 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 09:45:26.888653   94931 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 09:45:26.892419   94931 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0115 09:45:26.892445   94931 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0115 09:45:26.892453   94931 command_runner.go:130] > Device: 37h/55d	Inode: 552115      Links: 1
	I0115 09:45:26.892461   94931 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 09:45:26.892467   94931 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0115 09:45:26.892472   94931 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0115 09:45:26.892477   94931 command_runner.go:130] > Change: 2024-01-15 09:26:53.362832697 +0000
	I0115 09:45:26.892482   94931 command_runner.go:130] >  Birth: 2024-01-15 09:26:53.362832697 +0000
	I0115 09:45:26.892686   94931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 09:45:26.910332   94931 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0115 09:45:26.910415   94931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 09:45:26.937321   94931 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0115 09:45:26.937381   94931 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
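	The stock loopback and bridge/podman CNI configs are renamed out of the way so that kindnet (recommended earlier for this multi-node profile) becomes the only active plugin. Whether that took effect can be checked from the node (a sketch; the file names are the ones listed in the log line above):
	
	  out/minikube-linux-amd64 -p multinode-218062 ssh "ls /etc/cni/net.d"
	  # 87-podman-bridge.conflist and 100-crio-bridge.conf should now carry a .mk_disabled suffix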
	I0115 09:45:26.937390   94931 start.go:475] detecting cgroup driver to use...
	I0115 09:45:26.937425   94931 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0115 09:45:26.937468   94931 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 09:45:26.951915   94931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 09:45:26.962113   94931 docker.go:217] disabling cri-docker service (if available) ...
	I0115 09:45:26.962169   94931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 09:45:26.974352   94931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 09:45:26.987015   94931 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 09:45:27.061811   94931 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 09:45:27.137595   94931 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0115 09:45:27.137640   94931 docker.go:233] disabling docker service ...
	I0115 09:45:27.137678   94931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 09:45:27.154924   94931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 09:45:27.165578   94931 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 09:45:27.237446   94931 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0115 09:45:27.237519   94931 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 09:45:27.247703   94931 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0115 09:45:27.318943   94931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 09:45:27.329415   94931 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 09:45:27.342869   94931 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0115 09:45:27.343742   94931 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 09:45:27.343801   94931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:45:27.352469   94931 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 09:45:27.352530   94931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:45:27.360932   94931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:45:27.368910   94931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:45:27.377152   94931 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 09:45:27.384958   94931 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 09:45:27.391753   94931 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0115 09:45:27.392452   94931 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 09:45:27.399777   94931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 09:45:27.469043   94931 ssh_runner.go:195] Run: sudo systemctl restart crio
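	Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs manager detected on the host, and run conmon in the pod cgroup before crio is restarted. A quick spot-check of the resulting drop-in (a sketch, run against the node):
	
	  out/minikube-linux-amd64 -p multinode-218062 ssh "grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"
	  # expected: pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs", conmon_cgroup = "pod"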
	I0115 09:45:27.595522   94931 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 09:45:27.595587   94931 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 09:45:27.598907   94931 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0115 09:45:27.598935   94931 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0115 09:45:27.598949   94931 command_runner.go:130] > Device: 40h/64d	Inode: 186         Links: 1
	I0115 09:45:27.598957   94931 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 09:45:27.598962   94931 command_runner.go:130] > Access: 2024-01-15 09:45:27.579759051 +0000
	I0115 09:45:27.598968   94931 command_runner.go:130] > Modify: 2024-01-15 09:45:27.579759051 +0000
	I0115 09:45:27.598976   94931 command_runner.go:130] > Change: 2024-01-15 09:45:27.579759051 +0000
	I0115 09:45:27.598980   94931 command_runner.go:130] >  Birth: -
	I0115 09:45:27.599005   94931 start.go:543] Will wait 60s for crictl version
	I0115 09:45:27.599046   94931 ssh_runner.go:195] Run: which crictl
	I0115 09:45:27.602050   94931 command_runner.go:130] > /usr/bin/crictl
	I0115 09:45:27.602124   94931 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 09:45:27.634279   94931 command_runner.go:130] > Version:  0.1.0
	I0115 09:45:27.634303   94931 command_runner.go:130] > RuntimeName:  cri-o
	I0115 09:45:27.634308   94931 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0115 09:45:27.634314   94931 command_runner.go:130] > RuntimeApiVersion:  v1
	I0115 09:45:27.634333   94931 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0115 09:45:27.634401   94931 ssh_runner.go:195] Run: crio --version
	I0115 09:45:27.667351   94931 command_runner.go:130] > crio version 1.24.6
	I0115 09:45:27.667377   94931 command_runner.go:130] > Version:          1.24.6
	I0115 09:45:27.667388   94931 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0115 09:45:27.667392   94931 command_runner.go:130] > GitTreeState:     clean
	I0115 09:45:27.667398   94931 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0115 09:45:27.667402   94931 command_runner.go:130] > GoVersion:        go1.18.2
	I0115 09:45:27.667406   94931 command_runner.go:130] > Compiler:         gc
	I0115 09:45:27.667413   94931 command_runner.go:130] > Platform:         linux/amd64
	I0115 09:45:27.667422   94931 command_runner.go:130] > Linkmode:         dynamic
	I0115 09:45:27.667429   94931 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 09:45:27.667436   94931 command_runner.go:130] > SeccompEnabled:   true
	I0115 09:45:27.667460   94931 command_runner.go:130] > AppArmorEnabled:  false
	I0115 09:45:27.667546   94931 ssh_runner.go:195] Run: crio --version
	I0115 09:45:27.700658   94931 command_runner.go:130] > crio version 1.24.6
	I0115 09:45:27.700686   94931 command_runner.go:130] > Version:          1.24.6
	I0115 09:45:27.700693   94931 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0115 09:45:27.700697   94931 command_runner.go:130] > GitTreeState:     clean
	I0115 09:45:27.700703   94931 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0115 09:45:27.700708   94931 command_runner.go:130] > GoVersion:        go1.18.2
	I0115 09:45:27.700712   94931 command_runner.go:130] > Compiler:         gc
	I0115 09:45:27.700717   94931 command_runner.go:130] > Platform:         linux/amd64
	I0115 09:45:27.700722   94931 command_runner.go:130] > Linkmode:         dynamic
	I0115 09:45:27.700730   94931 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 09:45:27.700734   94931 command_runner.go:130] > SeccompEnabled:   true
	I0115 09:45:27.700739   94931 command_runner.go:130] > AppArmorEnabled:  false
	I0115 09:45:27.704338   94931 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0115 09:45:27.705934   94931 cli_runner.go:164] Run: docker network inspect multinode-218062 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 09:45:27.722162   94931 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0115 09:45:27.725643   94931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
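	host.minikube.internal is pinned to the network gateway 192.168.58.1 in the node's /etc/hosts here; this is the same address the busybox pods later resolve and try to ping in the failing test. The entry can be verified from the host (a sketch):
	
	  out/minikube-linux-amd64 -p multinode-218062 ssh "grep host.minikube.internal /etc/hosts"
	  # expected: 192.168.58.1	host.minikube.internal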
	I0115 09:45:27.735392   94931 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:45:27.735441   94931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 09:45:27.787937   94931 command_runner.go:130] > {
	I0115 09:45:27.787959   94931 command_runner.go:130] >   "images": [
	I0115 09:45:27.787964   94931 command_runner.go:130] >     {
	I0115 09:45:27.787972   94931 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0115 09:45:27.787979   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.787996   94931 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0115 09:45:27.788007   94931 command_runner.go:130] >       ],
	I0115 09:45:27.788017   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.788036   94931 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0115 09:45:27.788123   94931 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0115 09:45:27.788152   94931 command_runner.go:130] >       ],
	I0115 09:45:27.788162   94931 command_runner.go:130] >       "size": "65258016",
	I0115 09:45:27.788170   94931 command_runner.go:130] >       "uid": null,
	I0115 09:45:27.788182   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.788207   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.788216   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.788221   94931 command_runner.go:130] >     },
	I0115 09:45:27.788227   94931 command_runner.go:130] >     {
	I0115 09:45:27.788233   94931 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0115 09:45:27.788240   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.788250   94931 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0115 09:45:27.788261   94931 command_runner.go:130] >       ],
	I0115 09:45:27.788274   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.788294   94931 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0115 09:45:27.788306   94931 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0115 09:45:27.788313   94931 command_runner.go:130] >       ],
	I0115 09:45:27.788322   94931 command_runner.go:130] >       "size": "31470524",
	I0115 09:45:27.788329   94931 command_runner.go:130] >       "uid": null,
	I0115 09:45:27.788335   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.788347   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.788359   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.788366   94931 command_runner.go:130] >     },
	I0115 09:45:27.788378   94931 command_runner.go:130] >     {
	I0115 09:45:27.788393   94931 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0115 09:45:27.788404   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.788414   94931 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0115 09:45:27.788423   94931 command_runner.go:130] >       ],
	I0115 09:45:27.788428   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.788436   94931 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0115 09:45:27.788457   94931 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0115 09:45:27.788468   94931 command_runner.go:130] >       ],
	I0115 09:45:27.788502   94931 command_runner.go:130] >       "size": "53621675",
	I0115 09:45:27.788514   94931 command_runner.go:130] >       "uid": null,
	I0115 09:45:27.788527   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.788538   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.788547   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.788554   94931 command_runner.go:130] >     },
	I0115 09:45:27.788564   94931 command_runner.go:130] >     {
	I0115 09:45:27.788576   94931 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0115 09:45:27.788587   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.788599   94931 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0115 09:45:27.788610   94931 command_runner.go:130] >       ],
	I0115 09:45:27.788622   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.788634   94931 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0115 09:45:27.788650   94931 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0115 09:45:27.788672   94931 command_runner.go:130] >       ],
	I0115 09:45:27.788684   94931 command_runner.go:130] >       "size": "295456551",
	I0115 09:45:27.788696   94931 command_runner.go:130] >       "uid": {
	I0115 09:45:27.788704   94931 command_runner.go:130] >         "value": "0"
	I0115 09:45:27.788718   94931 command_runner.go:130] >       },
	I0115 09:45:27.788730   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.788739   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.788746   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.788757   94931 command_runner.go:130] >     },
	I0115 09:45:27.788768   94931 command_runner.go:130] >     {
	I0115 09:45:27.788784   94931 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0115 09:45:27.788795   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.788808   94931 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0115 09:45:27.788819   94931 command_runner.go:130] >       ],
	I0115 09:45:27.788827   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.788841   94931 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0115 09:45:27.788858   94931 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0115 09:45:27.788869   94931 command_runner.go:130] >       ],
	I0115 09:45:27.788881   94931 command_runner.go:130] >       "size": "127226832",
	I0115 09:45:27.788892   94931 command_runner.go:130] >       "uid": {
	I0115 09:45:27.788904   94931 command_runner.go:130] >         "value": "0"
	I0115 09:45:27.788920   94931 command_runner.go:130] >       },
	I0115 09:45:27.788933   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.788945   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.788958   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.788965   94931 command_runner.go:130] >     },
	I0115 09:45:27.788976   94931 command_runner.go:130] >     {
	I0115 09:45:27.788991   94931 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0115 09:45:27.789003   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.789016   94931 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0115 09:45:27.789026   94931 command_runner.go:130] >       ],
	I0115 09:45:27.789035   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.789053   94931 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0115 09:45:27.789070   94931 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0115 09:45:27.789081   94931 command_runner.go:130] >       ],
	I0115 09:45:27.789093   94931 command_runner.go:130] >       "size": "123261750",
	I0115 09:45:27.789123   94931 command_runner.go:130] >       "uid": {
	I0115 09:45:27.789132   94931 command_runner.go:130] >         "value": "0"
	I0115 09:45:27.789139   94931 command_runner.go:130] >       },
	I0115 09:45:27.789146   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.789159   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.789172   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.789179   94931 command_runner.go:130] >     },
	I0115 09:45:27.789186   94931 command_runner.go:130] >     {
	I0115 09:45:27.789201   94931 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0115 09:45:27.789210   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.789223   94931 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0115 09:45:27.789235   94931 command_runner.go:130] >       ],
	I0115 09:45:27.789247   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.789263   94931 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0115 09:45:27.789280   94931 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0115 09:45:27.789290   94931 command_runner.go:130] >       ],
	I0115 09:45:27.789300   94931 command_runner.go:130] >       "size": "74749335",
	I0115 09:45:27.789310   94931 command_runner.go:130] >       "uid": null,
	I0115 09:45:27.789322   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.789333   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.789345   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.789356   94931 command_runner.go:130] >     },
	I0115 09:45:27.789371   94931 command_runner.go:130] >     {
	I0115 09:45:27.789385   94931 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0115 09:45:27.789395   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.789408   94931 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0115 09:45:27.789420   94931 command_runner.go:130] >       ],
	I0115 09:45:27.789432   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.789470   94931 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0115 09:45:27.789490   94931 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0115 09:45:27.789496   94931 command_runner.go:130] >       ],
	I0115 09:45:27.789507   94931 command_runner.go:130] >       "size": "61551410",
	I0115 09:45:27.789519   94931 command_runner.go:130] >       "uid": {
	I0115 09:45:27.789527   94931 command_runner.go:130] >         "value": "0"
	I0115 09:45:27.789538   94931 command_runner.go:130] >       },
	I0115 09:45:27.789550   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.789562   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.789574   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.789584   94931 command_runner.go:130] >     },
	I0115 09:45:27.789592   94931 command_runner.go:130] >     {
	I0115 09:45:27.789606   94931 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0115 09:45:27.789619   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.789628   94931 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0115 09:45:27.789642   94931 command_runner.go:130] >       ],
	I0115 09:45:27.789654   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.789673   94931 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0115 09:45:27.789685   94931 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0115 09:45:27.789703   94931 command_runner.go:130] >       ],
	I0115 09:45:27.789712   94931 command_runner.go:130] >       "size": "750414",
	I0115 09:45:27.789723   94931 command_runner.go:130] >       "uid": {
	I0115 09:45:27.789732   94931 command_runner.go:130] >         "value": "65535"
	I0115 09:45:27.789743   94931 command_runner.go:130] >       },
	I0115 09:45:27.789755   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.789766   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.789777   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.789788   94931 command_runner.go:130] >     }
	I0115 09:45:27.789793   94931 command_runner.go:130] >   ]
	I0115 09:45:27.789798   94931 command_runner.go:130] > }
	I0115 09:45:27.790140   94931 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 09:45:27.790158   94931 crio.go:415] Images already preloaded, skipping extraction
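	(The image inventory above is the JSON that "sudo crictl images --output json" returns; minikube inspects it to conclude that every required image is already present and the preload extraction can be skipped. As a rough illustration only, and not minikube's actual code, the following minimal Go sketch decodes just the fields visible in this log - id, repoTags, repoDigests, size, pinned - and prints each image's first tag.)

	// Minimal sketch (not minikube's implementation) of decoding the
	// payload produced by `sudo crictl images --output json`, modeling
	// only the fields that appear in the log above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // reported as a string, e.g. "53621675"
		Pinned      bool     `json:"pinned"`
	}

	type criImageList struct {
		Images []criImage `json:"images"`
	}

	func main() {
		// Read the JSON from stdin, e.g. piped from the crictl command.
		var list criImageList
		if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range list.Images {
			tag := "<none>"
			if len(img.RepoTags) > 0 {
				tag = img.RepoTags[0]
			}
			fmt.Printf("%-55s %s bytes\n", tag, img.Size)
		}
	}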
	I0115 09:45:27.790198   94931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 09:45:27.819952   94931 command_runner.go:130] > {
	I0115 09:45:27.819973   94931 command_runner.go:130] >   "images": [
	I0115 09:45:27.819977   94931 command_runner.go:130] >     {
	I0115 09:45:27.819985   94931 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0115 09:45:27.819991   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.819997   94931 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0115 09:45:27.820000   94931 command_runner.go:130] >       ],
	I0115 09:45:27.820004   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.820015   94931 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0115 09:45:27.820024   94931 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0115 09:45:27.820030   94931 command_runner.go:130] >       ],
	I0115 09:45:27.820036   94931 command_runner.go:130] >       "size": "65258016",
	I0115 09:45:27.820043   94931 command_runner.go:130] >       "uid": null,
	I0115 09:45:27.820047   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.820056   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.820063   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.820068   94931 command_runner.go:130] >     },
	I0115 09:45:27.820077   94931 command_runner.go:130] >     {
	I0115 09:45:27.820086   94931 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0115 09:45:27.820093   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.820102   94931 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0115 09:45:27.820109   94931 command_runner.go:130] >       ],
	I0115 09:45:27.820116   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.820128   94931 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0115 09:45:27.820141   94931 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0115 09:45:27.820148   94931 command_runner.go:130] >       ],
	I0115 09:45:27.820157   94931 command_runner.go:130] >       "size": "31470524",
	I0115 09:45:27.820167   94931 command_runner.go:130] >       "uid": null,
	I0115 09:45:27.820177   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.820184   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.820195   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.820204   94931 command_runner.go:130] >     },
	I0115 09:45:27.820218   94931 command_runner.go:130] >     {
	I0115 09:45:27.820244   94931 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0115 09:45:27.820253   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.820265   94931 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0115 09:45:27.820275   94931 command_runner.go:130] >       ],
	I0115 09:45:27.820282   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.820298   94931 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0115 09:45:27.820314   94931 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0115 09:45:27.820323   94931 command_runner.go:130] >       ],
	I0115 09:45:27.820334   94931 command_runner.go:130] >       "size": "53621675",
	I0115 09:45:27.820343   94931 command_runner.go:130] >       "uid": null,
	I0115 09:45:27.820350   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.820355   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.820365   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.820372   94931 command_runner.go:130] >     },
	I0115 09:45:27.820382   94931 command_runner.go:130] >     {
	I0115 09:45:27.820395   94931 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0115 09:45:27.820405   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.820419   94931 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0115 09:45:27.820428   94931 command_runner.go:130] >       ],
	I0115 09:45:27.820436   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.820449   94931 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0115 09:45:27.820465   94931 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0115 09:45:27.820483   94931 command_runner.go:130] >       ],
	I0115 09:45:27.820494   94931 command_runner.go:130] >       "size": "295456551",
	I0115 09:45:27.820504   94931 command_runner.go:130] >       "uid": {
	I0115 09:45:27.820515   94931 command_runner.go:130] >         "value": "0"
	I0115 09:45:27.820524   94931 command_runner.go:130] >       },
	I0115 09:45:27.820531   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.820536   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.820545   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.820554   94931 command_runner.go:130] >     },
	I0115 09:45:27.820561   94931 command_runner.go:130] >     {
	I0115 09:45:27.820575   94931 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0115 09:45:27.820585   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.820597   94931 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0115 09:45:27.820622   94931 command_runner.go:130] >       ],
	I0115 09:45:27.820630   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.820641   94931 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0115 09:45:27.820653   94931 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0115 09:45:27.820658   94931 command_runner.go:130] >       ],
	I0115 09:45:27.820665   94931 command_runner.go:130] >       "size": "127226832",
	I0115 09:45:27.820672   94931 command_runner.go:130] >       "uid": {
	I0115 09:45:27.820683   94931 command_runner.go:130] >         "value": "0"
	I0115 09:45:27.820689   94931 command_runner.go:130] >       },
	I0115 09:45:27.820695   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.820704   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.820714   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.820722   94931 command_runner.go:130] >     },
	I0115 09:45:27.820731   94931 command_runner.go:130] >     {
	I0115 09:45:27.820744   94931 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0115 09:45:27.820753   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.820764   94931 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0115 09:45:27.820773   94931 command_runner.go:130] >       ],
	I0115 09:45:27.820791   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.820807   94931 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0115 09:45:27.820823   94931 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0115 09:45:27.820831   94931 command_runner.go:130] >       ],
	I0115 09:45:27.820841   94931 command_runner.go:130] >       "size": "123261750",
	I0115 09:45:27.820849   94931 command_runner.go:130] >       "uid": {
	I0115 09:45:27.820859   94931 command_runner.go:130] >         "value": "0"
	I0115 09:45:27.820868   94931 command_runner.go:130] >       },
	I0115 09:45:27.820877   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.820887   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.820896   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.820902   94931 command_runner.go:130] >     },
	I0115 09:45:27.820910   94931 command_runner.go:130] >     {
	I0115 09:45:27.820925   94931 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0115 09:45:27.820935   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.820947   94931 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0115 09:45:27.820957   94931 command_runner.go:130] >       ],
	I0115 09:45:27.820968   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.820989   94931 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0115 09:45:27.821003   94931 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0115 09:45:27.821011   94931 command_runner.go:130] >       ],
	I0115 09:45:27.821018   94931 command_runner.go:130] >       "size": "74749335",
	I0115 09:45:27.821027   94931 command_runner.go:130] >       "uid": null,
	I0115 09:45:27.821037   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.821046   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.821055   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.821064   94931 command_runner.go:130] >     },
	I0115 09:45:27.821072   94931 command_runner.go:130] >     {
	I0115 09:45:27.821091   94931 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0115 09:45:27.821145   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.821157   94931 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0115 09:45:27.821165   94931 command_runner.go:130] >       ],
	I0115 09:45:27.821175   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.821208   94931 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0115 09:45:27.821223   94931 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0115 09:45:27.821232   94931 command_runner.go:130] >       ],
	I0115 09:45:27.821244   94931 command_runner.go:130] >       "size": "61551410",
	I0115 09:45:27.821253   94931 command_runner.go:130] >       "uid": {
	I0115 09:45:27.821261   94931 command_runner.go:130] >         "value": "0"
	I0115 09:45:27.821270   94931 command_runner.go:130] >       },
	I0115 09:45:27.821279   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.821288   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.821297   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.821305   94931 command_runner.go:130] >     },
	I0115 09:45:27.821311   94931 command_runner.go:130] >     {
	I0115 09:45:27.821323   94931 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0115 09:45:27.821332   94931 command_runner.go:130] >       "repoTags": [
	I0115 09:45:27.821342   94931 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0115 09:45:27.821351   94931 command_runner.go:130] >       ],
	I0115 09:45:27.821360   94931 command_runner.go:130] >       "repoDigests": [
	I0115 09:45:27.821373   94931 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0115 09:45:27.821387   94931 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0115 09:45:27.821396   94931 command_runner.go:130] >       ],
	I0115 09:45:27.821403   94931 command_runner.go:130] >       "size": "750414",
	I0115 09:45:27.821416   94931 command_runner.go:130] >       "uid": {
	I0115 09:45:27.821427   94931 command_runner.go:130] >         "value": "65535"
	I0115 09:45:27.821436   94931 command_runner.go:130] >       },
	I0115 09:45:27.821446   94931 command_runner.go:130] >       "username": "",
	I0115 09:45:27.821457   94931 command_runner.go:130] >       "spec": null,
	I0115 09:45:27.821467   94931 command_runner.go:130] >       "pinned": false
	I0115 09:45:27.821475   94931 command_runner.go:130] >     }
	I0115 09:45:27.821484   94931 command_runner.go:130] >   ]
	I0115 09:45:27.821490   94931 command_runner.go:130] > }
	I0115 09:45:27.822045   94931 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 09:45:27.822073   94931 cache_images.go:84] Images are preloaded, skipping loading
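	(After the image check, the log shows minikube dumping the effective CRI-O configuration with "crio config"; the TOML that follows is that dump. The sketch below is illustrative only and is not how minikube parses this output: it runs the same command and prints the active, non-commented "key = value" lines such as cgroup_manager and pause_image, assuming crio is on PATH.)

	// Illustrative sketch only: run `crio config` and print the active
	// (non-commented) key = value lines from the TOML dump.
	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("crio", "config").Output() // assumes crio is installed and on PATH
		if err != nil {
			fmt.Println("crio config failed:", err)
			return
		}
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			// Skip comments, blank lines and section headers like [crio.runtime].
			if line == "" || strings.HasPrefix(line, "#") || strings.HasPrefix(line, "[") {
				continue
			}
			if strings.Contains(line, "=") {
				fmt.Println(line)
			}
		}
	}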
	I0115 09:45:27.822140   94931 ssh_runner.go:195] Run: crio config
	I0115 09:45:27.859329   94931 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0115 09:45:27.859361   94931 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0115 09:45:27.859372   94931 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0115 09:45:27.859377   94931 command_runner.go:130] > #
	I0115 09:45:27.859388   94931 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0115 09:45:27.859401   94931 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0115 09:45:27.859416   94931 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0115 09:45:27.859436   94931 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0115 09:45:27.859447   94931 command_runner.go:130] > # reload'.
	I0115 09:45:27.859460   94931 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0115 09:45:27.859474   94931 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0115 09:45:27.859488   94931 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0115 09:45:27.859501   94931 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0115 09:45:27.859508   94931 command_runner.go:130] > [crio]
	I0115 09:45:27.859522   94931 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0115 09:45:27.859535   94931 command_runner.go:130] > # containers images, in this directory.
	I0115 09:45:27.859556   94931 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0115 09:45:27.859573   94931 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0115 09:45:27.859583   94931 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0115 09:45:27.859597   94931 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0115 09:45:27.859610   94931 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0115 09:45:27.859619   94931 command_runner.go:130] > # storage_driver = "vfs"
	I0115 09:45:27.859632   94931 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0115 09:45:27.859650   94931 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0115 09:45:27.859661   94931 command_runner.go:130] > # storage_option = [
	I0115 09:45:27.859670   94931 command_runner.go:130] > # ]
	I0115 09:45:27.859682   94931 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0115 09:45:27.859696   94931 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0115 09:45:27.859716   94931 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0115 09:45:27.859728   94931 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0115 09:45:27.859739   94931 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0115 09:45:27.859749   94931 command_runner.go:130] > # always happen on a node reboot
	I0115 09:45:27.859762   94931 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0115 09:45:27.859776   94931 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0115 09:45:27.859789   94931 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0115 09:45:27.859810   94931 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0115 09:45:27.859822   94931 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0115 09:45:27.859836   94931 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0115 09:45:27.859852   94931 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0115 09:45:27.859863   94931 command_runner.go:130] > # internal_wipe = true
	I0115 09:45:27.859879   94931 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0115 09:45:27.859899   94931 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0115 09:45:27.859912   94931 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0115 09:45:27.859925   94931 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0115 09:45:27.859939   94931 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0115 09:45:27.859952   94931 command_runner.go:130] > [crio.api]
	I0115 09:45:27.859965   94931 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0115 09:45:27.859976   94931 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0115 09:45:27.859986   94931 command_runner.go:130] > # IP address on which the stream server will listen.
	I0115 09:45:27.859997   94931 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0115 09:45:27.860011   94931 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0115 09:45:27.860024   94931 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0115 09:45:27.860032   94931 command_runner.go:130] > # stream_port = "0"
	I0115 09:45:27.860044   94931 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0115 09:45:27.860055   94931 command_runner.go:130] > # stream_enable_tls = false
	I0115 09:45:27.860068   94931 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0115 09:45:27.860080   94931 command_runner.go:130] > # stream_idle_timeout = ""
	I0115 09:45:27.860091   94931 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0115 09:45:27.860105   94931 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0115 09:45:27.860118   94931 command_runner.go:130] > # minutes.
	I0115 09:45:27.860129   94931 command_runner.go:130] > # stream_tls_cert = ""
	I0115 09:45:27.860141   94931 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0115 09:45:27.860154   94931 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0115 09:45:27.860164   94931 command_runner.go:130] > # stream_tls_key = ""
	I0115 09:45:27.860175   94931 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0115 09:45:27.860189   94931 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0115 09:45:27.860201   94931 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0115 09:45:27.860210   94931 command_runner.go:130] > # stream_tls_ca = ""
	I0115 09:45:27.860223   94931 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 09:45:27.860235   94931 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0115 09:45:27.860251   94931 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 09:45:27.860267   94931 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0115 09:45:27.860301   94931 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0115 09:45:27.860316   94931 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0115 09:45:27.860322   94931 command_runner.go:130] > [crio.runtime]
	I0115 09:45:27.860333   94931 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0115 09:45:27.860343   94931 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0115 09:45:27.860353   94931 command_runner.go:130] > # "nofile=1024:2048"
	I0115 09:45:27.860364   94931 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0115 09:45:27.860376   94931 command_runner.go:130] > # default_ulimits = [
	I0115 09:45:27.860382   94931 command_runner.go:130] > # ]
	I0115 09:45:27.860391   94931 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0115 09:45:27.860400   94931 command_runner.go:130] > # no_pivot = false
	I0115 09:45:27.860412   94931 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0115 09:45:27.860421   94931 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0115 09:45:27.860430   94931 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0115 09:45:27.860444   94931 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0115 09:45:27.860453   94931 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0115 09:45:27.860464   94931 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 09:45:27.860478   94931 command_runner.go:130] > # conmon = ""
	I0115 09:45:27.860486   94931 command_runner.go:130] > # Cgroup setting for conmon
	I0115 09:45:27.860497   94931 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0115 09:45:27.860504   94931 command_runner.go:130] > conmon_cgroup = "pod"
	I0115 09:45:27.860513   94931 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0115 09:45:27.860525   94931 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0115 09:45:27.860540   94931 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 09:45:27.860553   94931 command_runner.go:130] > # conmon_env = [
	I0115 09:45:27.860559   94931 command_runner.go:130] > # ]
	I0115 09:45:27.860568   94931 command_runner.go:130] > # Additional environment variables to set for all the
	I0115 09:45:27.860580   94931 command_runner.go:130] > # containers. These are overridden if set in the
	I0115 09:45:27.860590   94931 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0115 09:45:27.860600   94931 command_runner.go:130] > # default_env = [
	I0115 09:45:27.860607   94931 command_runner.go:130] > # ]
	I0115 09:45:27.860615   94931 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0115 09:45:27.860625   94931 command_runner.go:130] > # selinux = false
	I0115 09:45:27.860635   94931 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0115 09:45:27.860649   94931 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0115 09:45:27.860661   94931 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0115 09:45:27.860671   94931 command_runner.go:130] > # seccomp_profile = ""
	I0115 09:45:27.860683   94931 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0115 09:45:27.860695   94931 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0115 09:45:27.860710   94931 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0115 09:45:27.860721   94931 command_runner.go:130] > # which might increase security.
	I0115 09:45:27.860736   94931 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0115 09:45:27.860754   94931 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0115 09:45:27.860767   94931 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0115 09:45:27.860780   94931 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0115 09:45:27.860793   94931 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0115 09:45:27.860802   94931 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:45:27.860810   94931 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0115 09:45:27.860825   94931 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0115 09:45:27.860836   94931 command_runner.go:130] > # the cgroup blockio controller.
	I0115 09:45:27.860844   94931 command_runner.go:130] > # blockio_config_file = ""
	I0115 09:45:27.860858   94931 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0115 09:45:27.860868   94931 command_runner.go:130] > # irqbalance daemon.
	I0115 09:45:27.860879   94931 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0115 09:45:27.860892   94931 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0115 09:45:27.860903   94931 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:45:27.860910   94931 command_runner.go:130] > # rdt_config_file = ""
	I0115 09:45:27.860919   94931 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0115 09:45:27.860930   94931 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0115 09:45:27.860948   94931 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0115 09:45:27.860958   94931 command_runner.go:130] > # separate_pull_cgroup = ""
	I0115 09:45:27.860971   94931 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0115 09:45:27.860984   94931 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0115 09:45:27.860992   94931 command_runner.go:130] > # will be added.
	I0115 09:45:27.860996   94931 command_runner.go:130] > # default_capabilities = [
	I0115 09:45:27.861005   94931 command_runner.go:130] > # 	"CHOWN",
	I0115 09:45:27.861015   94931 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0115 09:45:27.861025   94931 command_runner.go:130] > # 	"FSETID",
	I0115 09:45:27.861032   94931 command_runner.go:130] > # 	"FOWNER",
	I0115 09:45:27.861042   94931 command_runner.go:130] > # 	"SETGID",
	I0115 09:45:27.861051   94931 command_runner.go:130] > # 	"SETUID",
	I0115 09:45:27.861060   94931 command_runner.go:130] > # 	"SETPCAP",
	I0115 09:45:27.861070   94931 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0115 09:45:27.861079   94931 command_runner.go:130] > # 	"KILL",
	I0115 09:45:27.861088   94931 command_runner.go:130] > # ]
	I0115 09:45:27.861117   94931 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0115 09:45:27.861131   94931 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0115 09:45:27.861143   94931 command_runner.go:130] > # add_inheritable_capabilities = true
	I0115 09:45:27.861157   94931 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0115 09:45:27.861170   94931 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 09:45:27.861180   94931 command_runner.go:130] > # default_sysctls = [
	I0115 09:45:27.861188   94931 command_runner.go:130] > # ]
	I0115 09:45:27.861198   94931 command_runner.go:130] > # List of devices on the host that a
	I0115 09:45:27.861207   94931 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0115 09:45:27.861216   94931 command_runner.go:130] > # allowed_devices = [
	I0115 09:45:27.861226   94931 command_runner.go:130] > # 	"/dev/fuse",
	I0115 09:45:27.861232   94931 command_runner.go:130] > # ]
	I0115 09:45:27.861248   94931 command_runner.go:130] > # List of additional devices, specified as
	I0115 09:45:27.861291   94931 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0115 09:45:27.861301   94931 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0115 09:45:27.861314   94931 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 09:45:27.861327   94931 command_runner.go:130] > # additional_devices = [
	I0115 09:45:27.861337   94931 command_runner.go:130] > # ]
	I0115 09:45:27.861349   94931 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0115 09:45:27.861359   94931 command_runner.go:130] > # cdi_spec_dirs = [
	I0115 09:45:27.861372   94931 command_runner.go:130] > # 	"/etc/cdi",
	I0115 09:45:27.861382   94931 command_runner.go:130] > # 	"/var/run/cdi",
	I0115 09:45:27.861390   94931 command_runner.go:130] > # ]
	I0115 09:45:27.861399   94931 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0115 09:45:27.861411   94931 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0115 09:45:27.861422   94931 command_runner.go:130] > # Defaults to false.
	I0115 09:45:27.861436   94931 command_runner.go:130] > # device_ownership_from_security_context = false
	I0115 09:45:27.861449   94931 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0115 09:45:27.861462   94931 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0115 09:45:27.861472   94931 command_runner.go:130] > # hooks_dir = [
	I0115 09:45:27.861483   94931 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0115 09:45:27.861490   94931 command_runner.go:130] > # ]
	I0115 09:45:27.861496   94931 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0115 09:45:27.861511   94931 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0115 09:45:27.861524   94931 command_runner.go:130] > # its default mounts from the following two files:
	I0115 09:45:27.861533   94931 command_runner.go:130] > #
	I0115 09:45:27.861546   94931 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0115 09:45:27.861560   94931 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0115 09:45:27.861574   94931 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0115 09:45:27.861583   94931 command_runner.go:130] > #
	I0115 09:45:27.861590   94931 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0115 09:45:27.861601   94931 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0115 09:45:27.861615   94931 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0115 09:45:27.861627   94931 command_runner.go:130] > #      only add mounts it finds in this file.
	I0115 09:45:27.861632   94931 command_runner.go:130] > #
	I0115 09:45:27.861643   94931 command_runner.go:130] > # default_mounts_file = ""
	I0115 09:45:27.861655   94931 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0115 09:45:27.861669   94931 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0115 09:45:27.861679   94931 command_runner.go:130] > # pids_limit = 0
	I0115 09:45:27.861691   94931 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0115 09:45:27.861700   94931 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0115 09:45:27.861718   94931 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0115 09:45:27.861735   94931 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0115 09:45:27.861745   94931 command_runner.go:130] > # log_size_max = -1
	I0115 09:45:27.861759   94931 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0115 09:45:27.861772   94931 command_runner.go:130] > # log_to_journald = false
	I0115 09:45:27.861789   94931 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0115 09:45:27.861797   94931 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0115 09:45:27.861806   94931 command_runner.go:130] > # Path to directory for container attach sockets.
	I0115 09:45:27.861818   94931 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0115 09:45:27.861830   94931 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0115 09:45:27.861841   94931 command_runner.go:130] > # bind_mount_prefix = ""
	I0115 09:45:27.861854   94931 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0115 09:45:27.861864   94931 command_runner.go:130] > # read_only = false
	I0115 09:45:27.861878   94931 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0115 09:45:27.861890   94931 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0115 09:45:27.861898   94931 command_runner.go:130] > # live configuration reload.
	I0115 09:45:27.861902   94931 command_runner.go:130] > # log_level = "info"
	I0115 09:45:27.861915   94931 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0115 09:45:27.861928   94931 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:45:27.861938   94931 command_runner.go:130] > # log_filter = ""
	I0115 09:45:27.861951   94931 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0115 09:45:27.861964   94931 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0115 09:45:27.861973   94931 command_runner.go:130] > # separated by comma.
	I0115 09:45:27.861986   94931 command_runner.go:130] > # uid_mappings = ""
	I0115 09:45:27.861995   94931 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0115 09:45:27.862007   94931 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0115 09:45:27.862018   94931 command_runner.go:130] > # separated by comma.
	I0115 09:45:27.862028   94931 command_runner.go:130] > # gid_mappings = ""
	I0115 09:45:27.862041   94931 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0115 09:45:27.862053   94931 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 09:45:27.862066   94931 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 09:45:27.862076   94931 command_runner.go:130] > # minimum_mappable_uid = -1
	I0115 09:45:27.862084   94931 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0115 09:45:27.862097   94931 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 09:45:27.862111   94931 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 09:45:27.862122   94931 command_runner.go:130] > # minimum_mappable_gid = -1
	I0115 09:45:27.862135   94931 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0115 09:45:27.862147   94931 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0115 09:45:27.862159   94931 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0115 09:45:27.862170   94931 command_runner.go:130] > # ctr_stop_timeout = 30
	I0115 09:45:27.862180   94931 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0115 09:45:27.862196   94931 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0115 09:45:27.862210   94931 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0115 09:45:27.862222   94931 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0115 09:45:27.862236   94931 command_runner.go:130] > # drop_infra_ctr = true
	I0115 09:45:27.862249   94931 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0115 09:45:27.862261   94931 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0115 09:45:27.862274   94931 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0115 09:45:27.862280   94931 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0115 09:45:27.862290   94931 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0115 09:45:27.862302   94931 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0115 09:45:27.862313   94931 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0115 09:45:27.862330   94931 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0115 09:45:27.862340   94931 command_runner.go:130] > # pinns_path = ""
	I0115 09:45:27.862353   94931 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0115 09:45:27.862366   94931 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0115 09:45:27.862385   94931 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0115 09:45:27.862396   94931 command_runner.go:130] > # default_runtime = "runc"
	I0115 09:45:27.862408   94931 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0115 09:45:27.862425   94931 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0115 09:45:27.862443   94931 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0115 09:45:27.862454   94931 command_runner.go:130] > # creation as a file is not desired either.
	I0115 09:45:27.862465   94931 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0115 09:45:27.862476   94931 command_runner.go:130] > # the hostname is being managed dynamically.
	I0115 09:45:27.862488   94931 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0115 09:45:27.862497   94931 command_runner.go:130] > # ]
	I0115 09:45:27.862511   94931 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0115 09:45:27.862524   94931 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0115 09:45:27.862538   94931 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0115 09:45:27.862551   94931 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0115 09:45:27.862557   94931 command_runner.go:130] > #
	I0115 09:45:27.862563   94931 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0115 09:45:27.862573   94931 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0115 09:45:27.862584   94931 command_runner.go:130] > #  runtime_type = "oci"
	I0115 09:45:27.862593   94931 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0115 09:45:27.862604   94931 command_runner.go:130] > #  privileged_without_host_devices = false
	I0115 09:45:27.862614   94931 command_runner.go:130] > #  allowed_annotations = []
	I0115 09:45:27.862627   94931 command_runner.go:130] > # Where:
	I0115 09:45:27.862639   94931 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0115 09:45:27.862652   94931 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0115 09:45:27.862663   94931 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0115 09:45:27.862676   94931 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0115 09:45:27.862686   94931 command_runner.go:130] > #   in $PATH.
	I0115 09:45:27.862697   94931 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0115 09:45:27.862714   94931 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0115 09:45:27.862727   94931 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0115 09:45:27.862736   94931 command_runner.go:130] > #   state.
	I0115 09:45:27.862749   94931 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0115 09:45:27.862758   94931 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0115 09:45:27.862772   94931 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0115 09:45:27.862785   94931 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0115 09:45:27.862799   94931 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0115 09:45:27.862812   94931 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0115 09:45:27.862823   94931 command_runner.go:130] > #   The currently recognized values are:
	I0115 09:45:27.862837   94931 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0115 09:45:27.862852   94931 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0115 09:45:27.862863   94931 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0115 09:45:27.862877   94931 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0115 09:45:27.862893   94931 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0115 09:45:27.862907   94931 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0115 09:45:27.862920   94931 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0115 09:45:27.862934   94931 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0115 09:45:27.862941   94931 command_runner.go:130] > #   should be moved to the container's cgroup
	I0115 09:45:27.862950   94931 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0115 09:45:27.862963   94931 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0115 09:45:27.862973   94931 command_runner.go:130] > runtime_type = "oci"
	I0115 09:45:27.862983   94931 command_runner.go:130] > runtime_root = "/run/runc"
	I0115 09:45:27.862993   94931 command_runner.go:130] > runtime_config_path = ""
	I0115 09:45:27.863003   94931 command_runner.go:130] > monitor_path = ""
	I0115 09:45:27.863013   94931 command_runner.go:130] > monitor_cgroup = ""
	I0115 09:45:27.863022   94931 command_runner.go:130] > monitor_exec_cgroup = ""
	I0115 09:45:27.863105   94931 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0115 09:45:27.863117   94931 command_runner.go:130] > # running containers
	I0115 09:45:27.863127   94931 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0115 09:45:27.863141   94931 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0115 09:45:27.863157   94931 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0115 09:45:27.863170   94931 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0115 09:45:27.863182   94931 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0115 09:45:27.863193   94931 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0115 09:45:27.863200   94931 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0115 09:45:27.863207   94931 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0115 09:45:27.863218   94931 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0115 09:45:27.863241   94931 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0115 09:45:27.863256   94931 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0115 09:45:27.863268   94931 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0115 09:45:27.863281   94931 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0115 09:45:27.863296   94931 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0115 09:45:27.863308   94931 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0115 09:45:27.863319   94931 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0115 09:45:27.863337   94931 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0115 09:45:27.863353   94931 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0115 09:45:27.863370   94931 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0115 09:45:27.863384   94931 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0115 09:45:27.863394   94931 command_runner.go:130] > # Example:
	I0115 09:45:27.863405   94931 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0115 09:45:27.863415   94931 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0115 09:45:27.863422   94931 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0115 09:45:27.863434   94931 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0115 09:45:27.863444   94931 command_runner.go:130] > # cpuset = 0
	I0115 09:45:27.863452   94931 command_runner.go:130] > # cpushares = "0-1"
	I0115 09:45:27.863462   94931 command_runner.go:130] > # Where:
	I0115 09:45:27.863473   94931 command_runner.go:130] > # The workload name is workload-type.
	I0115 09:45:27.863487   94931 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0115 09:45:27.863499   94931 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0115 09:45:27.863510   94931 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0115 09:45:27.863521   94931 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0115 09:45:27.863534   94931 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0115 09:45:27.863543   94931 command_runner.go:130] > # 
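The workloads table echoed above is driven purely by pod annotations: the activation annotation opts the pod in, and the per-container annotation overrides individual resources. A minimal sketch of opting a pod in, assuming the commented-out workload-type example were actually enabled and using a hypothetical container name "nginx":

	kubectl run nginx --image=nginx \
	  --annotations='io.crio/workload=' \
	  --annotations='io.crio.workload-type/nginx={"cpushares": "512"}'
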
	I0115 09:45:27.863554   94931 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0115 09:45:27.863567   94931 command_runner.go:130] > #
	I0115 09:45:27.863582   94931 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0115 09:45:27.863594   94931 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0115 09:45:27.863608   94931 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0115 09:45:27.863617   94931 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0115 09:45:27.863629   94931 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0115 09:45:27.863638   94931 command_runner.go:130] > [crio.image]
	I0115 09:45:27.863649   94931 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0115 09:45:27.863660   94931 command_runner.go:130] > # default_transport = "docker://"
	I0115 09:45:27.863673   94931 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0115 09:45:27.863686   94931 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0115 09:45:27.863697   94931 command_runner.go:130] > # global_auth_file = ""
	I0115 09:45:27.863709   94931 command_runner.go:130] > # The image used to instantiate infra containers.
	I0115 09:45:27.863719   94931 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:45:27.863731   94931 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0115 09:45:27.863745   94931 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0115 09:45:27.863758   94931 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0115 09:45:27.863769   94931 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:45:27.863782   94931 command_runner.go:130] > # pause_image_auth_file = ""
	I0115 09:45:27.863795   94931 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0115 09:45:27.863807   94931 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0115 09:45:27.863816   94931 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0115 09:45:27.863828   94931 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0115 09:45:27.863846   94931 command_runner.go:130] > # pause_command = "/pause"
	I0115 09:45:27.863859   94931 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0115 09:45:27.863873   94931 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0115 09:45:27.863887   94931 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0115 09:45:27.863900   94931 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0115 09:45:27.863909   94931 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0115 09:45:27.863916   94931 command_runner.go:130] > # signature_policy = ""
	I0115 09:45:27.863932   94931 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0115 09:45:27.863945   94931 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0115 09:45:27.863956   94931 command_runner.go:130] > # changing them here.
	I0115 09:45:27.863966   94931 command_runner.go:130] > # insecure_registries = [
	I0115 09:45:27.863975   94931 command_runner.go:130] > # ]
	I0115 09:45:27.863988   94931 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0115 09:45:27.864004   94931 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0115 09:45:27.864014   94931 command_runner.go:130] > # image_volumes = "mkdir"
	I0115 09:45:27.864024   94931 command_runner.go:130] > # Temporary directory to use for storing big files
	I0115 09:45:27.864035   94931 command_runner.go:130] > # big_files_temporary_dir = ""
	I0115 09:45:27.864048   94931 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0115 09:45:27.864059   94931 command_runner.go:130] > # CNI plugins.
	I0115 09:45:27.864068   94931 command_runner.go:130] > [crio.network]
	I0115 09:45:27.864080   94931 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0115 09:45:27.864092   94931 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0115 09:45:27.864102   94931 command_runner.go:130] > # cni_default_network = ""
	I0115 09:45:27.864113   94931 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0115 09:45:27.864120   94931 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0115 09:45:27.864133   94931 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0115 09:45:27.864143   94931 command_runner.go:130] > # plugin_dirs = [
	I0115 09:45:27.864150   94931 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0115 09:45:27.864160   94931 command_runner.go:130] > # ]
	I0115 09:45:27.864172   94931 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0115 09:45:27.864182   94931 command_runner.go:130] > [crio.metrics]
	I0115 09:45:27.864196   94931 command_runner.go:130] > # Globally enable or disable metrics support.
	I0115 09:45:27.864206   94931 command_runner.go:130] > # enable_metrics = false
	I0115 09:45:27.864214   94931 command_runner.go:130] > # Specify enabled metrics collectors.
	I0115 09:45:27.864224   94931 command_runner.go:130] > # Per default all metrics are enabled.
	I0115 09:45:27.864237   94931 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0115 09:45:27.864250   94931 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0115 09:45:27.864263   94931 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0115 09:45:27.864274   94931 command_runner.go:130] > # metrics_collectors = [
	I0115 09:45:27.864284   94931 command_runner.go:130] > # 	"operations",
	I0115 09:45:27.864295   94931 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0115 09:45:27.864305   94931 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0115 09:45:27.864314   94931 command_runner.go:130] > # 	"operations_errors",
	I0115 09:45:27.864321   94931 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0115 09:45:27.864328   94931 command_runner.go:130] > # 	"image_pulls_by_name",
	I0115 09:45:27.864339   94931 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0115 09:45:27.864350   94931 command_runner.go:130] > # 	"image_pulls_failures",
	I0115 09:45:27.864357   94931 command_runner.go:130] > # 	"image_pulls_successes",
	I0115 09:45:27.864369   94931 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0115 09:45:27.864383   94931 command_runner.go:130] > # 	"image_layer_reuse",
	I0115 09:45:27.864393   94931 command_runner.go:130] > # 	"containers_oom_total",
	I0115 09:45:27.864403   94931 command_runner.go:130] > # 	"containers_oom",
	I0115 09:45:27.864413   94931 command_runner.go:130] > # 	"processes_defunct",
	I0115 09:45:27.864420   94931 command_runner.go:130] > # 	"operations_total",
	I0115 09:45:27.864424   94931 command_runner.go:130] > # 	"operations_latency_seconds",
	I0115 09:45:27.864435   94931 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0115 09:45:27.864447   94931 command_runner.go:130] > # 	"operations_errors_total",
	I0115 09:45:27.864458   94931 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0115 09:45:27.864469   94931 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0115 09:45:27.864485   94931 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0115 09:45:27.864495   94931 command_runner.go:130] > # 	"image_pulls_success_total",
	I0115 09:45:27.864506   94931 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0115 09:45:27.864514   94931 command_runner.go:130] > # 	"containers_oom_count_total",
	I0115 09:45:27.864520   94931 command_runner.go:130] > # ]
	I0115 09:45:27.864529   94931 command_runner.go:130] > # The port on which the metrics server will listen.
	I0115 09:45:27.864539   94931 command_runner.go:130] > # metrics_port = 9090
	I0115 09:45:27.864551   94931 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0115 09:45:27.864565   94931 command_runner.go:130] > # metrics_socket = ""
	I0115 09:45:27.864576   94931 command_runner.go:130] > # The certificate for the secure metrics server.
	I0115 09:45:27.864590   94931 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0115 09:45:27.864601   94931 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0115 09:45:27.864608   94931 command_runner.go:130] > # certificate on any modification event.
	I0115 09:45:27.864616   94931 command_runner.go:130] > # metrics_cert = ""
	I0115 09:45:27.864627   94931 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0115 09:45:27.864637   94931 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0115 09:45:27.864647   94931 command_runner.go:130] > # metrics_key = ""
	I0115 09:45:27.864659   94931 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0115 09:45:27.864669   94931 command_runner.go:130] > [crio.tracing]
	I0115 09:45:27.864681   94931 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0115 09:45:27.864691   94931 command_runner.go:130] > # enable_tracing = false
	I0115 09:45:27.864699   94931 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0115 09:45:27.864712   94931 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0115 09:45:27.864724   94931 command_runner.go:130] > # Number of samples to collect per million spans.
	I0115 09:45:27.864736   94931 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0115 09:45:27.864748   94931 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0115 09:45:27.864762   94931 command_runner.go:130] > [crio.stats]
	I0115 09:45:27.864775   94931 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0115 09:45:27.864784   94931 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0115 09:45:27.864790   94931 command_runner.go:130] > # stats_collection_period = 0
	I0115 09:45:27.866232   94931 command_runner.go:130] ! time="2024-01-15 09:45:27.857025618Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0115 09:45:27.866259   94931 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
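The block above is minikube echoing back the CRI-O configuration it generated, which is essentially the upstream defaults with pause_image pinned to registry.k8s.io/pause:3.9. If the effective runtime settings ever need to be double-checked on the node, one way (a sketch, assuming the docker-driver node for this profile is reachable over minikube ssh) is:

	minikube -p multinode-218062 ssh -- sudo crio config 2>/dev/null | grep pause_image
	minikube -p multinode-218062 ssh -- sudo crictl info | head -n 20
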
	I0115 09:45:27.866365   94931 cni.go:84] Creating CNI manager for ""
	I0115 09:45:27.866383   94931 cni.go:136] 1 nodes found, recommending kindnet
	I0115 09:45:27.866405   94931 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 09:45:27.866426   94931 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-218062 NodeName:multinode-218062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 09:45:27.866592   94931 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-218062"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
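The kubeadm config above is what gets written out as /var/tmp/minikube/kubeadm.yaml.new and later copied into place. A quick way to sanity-check a config like this before the real init (a sketch, run on the node after the file has been copied to /var/tmp/minikube/kubeadm.yaml) is a dry run:

	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
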
	
	I0115 09:45:27.866673   94931 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-218062 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-218062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
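The [Unit]/[Service] fragment above is the 10-kubeadm.conf drop-in that minikube writes for the kubelet, overriding ExecStart with the CRI-O socket and the node IP. On the node, the merged unit can be reviewed with (sketch):

	systemctl cat kubelet
	systemctl is-enabled kubelet
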
	I0115 09:45:27.866743   94931 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 09:45:27.874067   94931 command_runner.go:130] > kubeadm
	I0115 09:45:27.874081   94931 command_runner.go:130] > kubectl
	I0115 09:45:27.874085   94931 command_runner.go:130] > kubelet
	I0115 09:45:27.874775   94931 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 09:45:27.874831   94931 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 09:45:27.882445   94931 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0115 09:45:27.898478   94931 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 09:45:27.914306   94931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0115 09:45:27.930251   94931 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0115 09:45:27.933436   94931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 09:45:27.943404   94931 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062 for IP: 192.168.58.2
	I0115 09:45:27.943441   94931 certs.go:190] acquiring lock for shared ca certs: {Name:mk436e7b36fef987bcfd7cb65df7b354c02b1a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:45:27.943580   94931 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.key
	I0115 09:45:27.943621   94931 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.key
	I0115 09:45:27.943663   94931 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/client.key
	I0115 09:45:27.943679   94931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/client.crt with IP's: []
	I0115 09:45:27.998922   94931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/client.crt ...
	I0115 09:45:27.998954   94931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/client.crt: {Name:mk517522ede87d53d25d9a5988a45e13e2d53496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:45:27.999113   94931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/client.key ...
	I0115 09:45:27.999127   94931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/client.key: {Name:mk7ff2789a750caf767fe0545850550425c313ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:45:27.999193   94931 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/apiserver.key.cee25041
	I0115 09:45:27.999206   94931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0115 09:45:28.208730   94931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/apiserver.crt.cee25041 ...
	I0115 09:45:28.208764   94931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/apiserver.crt.cee25041: {Name:mk30c34d47c92115c734a0d6437a718f89121f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:45:28.208910   94931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/apiserver.key.cee25041 ...
	I0115 09:45:28.208922   94931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/apiserver.key.cee25041: {Name:mka658aa99e948058705fa0842b5d52a7693a025 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:45:28.208990   94931 certs.go:337] copying /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/apiserver.crt
	I0115 09:45:28.209070   94931 certs.go:341] copying /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/apiserver.key
	I0115 09:45:28.209186   94931 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/proxy-client.key
	I0115 09:45:28.209204   94931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/proxy-client.crt with IP's: []
	I0115 09:45:28.418625   94931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/proxy-client.crt ...
	I0115 09:45:28.418656   94931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/proxy-client.crt: {Name:mk1da45a551f76744145e12133b63ce21c2596cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:45:28.418799   94931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/proxy-client.key ...
	I0115 09:45:28.418811   94931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/proxy-client.key: {Name:mkfc09c178c2ff8b47f752323adc5e7ce4cab8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:45:28.418875   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0115 09:45:28.418892   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0115 09:45:28.418908   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0115 09:45:28.418924   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0115 09:45:28.418936   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 09:45:28.418949   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 09:45:28.418963   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 09:45:28.418976   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 09:45:28.419026   94931 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/11825.pem (1338 bytes)
	W0115 09:45:28.419058   94931 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/11825_empty.pem, impossibly tiny 0 bytes
	I0115 09:45:28.419091   94931 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca-key.pem (1675 bytes)
	I0115 09:45:28.419121   94931 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem (1082 bytes)
	I0115 09:45:28.419147   94931 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem (1123 bytes)
	I0115 09:45:28.419173   94931 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/key.pem (1679 bytes)
	I0115 09:45:28.419214   94931 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem (1708 bytes)
	I0115 09:45:28.419244   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:45:28.419257   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/11825.pem -> /usr/share/ca-certificates/11825.pem
	I0115 09:45:28.419268   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem -> /usr/share/ca-certificates/118252.pem
	I0115 09:45:28.419840   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 09:45:28.441043   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 09:45:28.464343   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 09:45:28.485550   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 09:45:28.506757   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 09:45:28.527997   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 09:45:28.548795   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 09:45:28.569982   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 09:45:28.591675   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 09:45:28.612553   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/certs/11825.pem --> /usr/share/ca-certificates/11825.pem (1338 bytes)
	I0115 09:45:28.633351   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem --> /usr/share/ca-certificates/118252.pem (1708 bytes)
	I0115 09:45:28.653469   94931 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 09:45:28.668353   94931 ssh_runner.go:195] Run: openssl version
	I0115 09:45:28.672806   94931 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0115 09:45:28.673028   94931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 09:45:28.681031   94931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:45:28.683927   94931 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:45:28.683964   94931 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:45:28.684005   94931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:45:28.689810   94931 command_runner.go:130] > b5213941
	I0115 09:45:28.690034   94931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 09:45:28.698038   94931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11825.pem && ln -fs /usr/share/ca-certificates/11825.pem /etc/ssl/certs/11825.pem"
	I0115 09:45:28.705791   94931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11825.pem
	I0115 09:45:28.708636   94931 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 15 09:33 /usr/share/ca-certificates/11825.pem
	I0115 09:45:28.708664   94931 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:33 /usr/share/ca-certificates/11825.pem
	I0115 09:45:28.708701   94931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11825.pem
	I0115 09:45:28.714448   94931 command_runner.go:130] > 51391683
	I0115 09:45:28.714633   94931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11825.pem /etc/ssl/certs/51391683.0"
	I0115 09:45:28.722535   94931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/118252.pem && ln -fs /usr/share/ca-certificates/118252.pem /etc/ssl/certs/118252.pem"
	I0115 09:45:28.730514   94931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/118252.pem
	I0115 09:45:28.733596   94931 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 15 09:33 /usr/share/ca-certificates/118252.pem
	I0115 09:45:28.733616   94931 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:33 /usr/share/ca-certificates/118252.pem
	I0115 09:45:28.733652   94931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/118252.pem
	I0115 09:45:28.739415   94931 command_runner.go:130] > 3ec20f2e
	I0115 09:45:28.739615   94931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/118252.pem /etc/ssl/certs/3ec20f2e.0"
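The openssl hash / symlink pairs above are how the minikube CA and the test certificates get wired into the system trust store: the subject hash (b5213941, 51391683, 3ec20f2e) becomes the <hash>.0 link name under /etc/ssl/certs. Verifying one of them by hand on the node is just (sketch):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0
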
	I0115 09:45:28.747594   94931 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 09:45:28.750563   94931 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 09:45:28.750612   94931 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
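At this point the profile certificates exist under /var/lib/minikube/certs but the etcd sub-directory does not, which is how minikube detects a first start. The apiserver certificate generated a few lines earlier was signed for the SANs listed in the kubeadm config; inspecting it on the node would look like this (sketch):

	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
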
	I0115 09:45:28.750666   94931 kubeadm.go:404] StartCluster: {Name:multinode-218062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-218062 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:45:28.750760   94931 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 09:45:28.750827   94931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 09:45:28.782611   94931 cri.go:89] found id: ""
	I0115 09:45:28.782670   94931 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 09:45:28.789953   94931 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0115 09:45:28.789983   94931 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0115 09:45:28.789993   94931 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0115 09:45:28.790740   94931 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 09:45:28.798594   94931 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0115 09:45:28.798647   94931 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 09:45:28.806190   94931 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0115 09:45:28.806225   94931 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0115 09:45:28.806235   94931 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0115 09:45:28.806246   94931 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 09:45:28.806285   94931 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 09:45:28.806329   94931 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0115 09:45:28.850766   94931 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0115 09:45:28.850821   94931 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0115 09:45:28.850901   94931 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 09:45:28.850917   94931 command_runner.go:130] > [preflight] Running pre-flight checks
	I0115 09:45:28.886409   94931 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0115 09:45:28.886440   94931 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0115 09:45:28.886521   94931 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1048-gcp
	I0115 09:45:28.886537   94931 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1048-gcp
	I0115 09:45:28.886580   94931 kubeadm.go:322] OS: Linux
	I0115 09:45:28.886609   94931 command_runner.go:130] > OS: Linux
	I0115 09:45:28.886696   94931 kubeadm.go:322] CGROUPS_CPU: enabled
	I0115 09:45:28.886711   94931 command_runner.go:130] > CGROUPS_CPU: enabled
	I0115 09:45:28.886779   94931 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0115 09:45:28.886790   94931 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0115 09:45:28.886857   94931 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0115 09:45:28.886867   94931 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0115 09:45:28.886941   94931 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0115 09:45:28.886962   94931 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0115 09:45:28.887038   94931 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0115 09:45:28.887051   94931 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0115 09:45:28.887137   94931 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0115 09:45:28.887149   94931 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0115 09:45:28.887219   94931 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0115 09:45:28.887261   94931 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0115 09:45:28.887327   94931 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0115 09:45:28.887337   94931 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0115 09:45:28.887394   94931 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0115 09:45:28.887404   94931 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0115 09:45:28.949626   94931 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 09:45:28.949655   94931 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 09:45:28.949760   94931 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 09:45:28.949772   94931 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 09:45:28.949910   94931 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 09:45:28.949943   94931 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 09:45:29.142863   94931 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 09:45:29.142898   94931 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 09:45:29.146568   94931 out.go:204]   - Generating certificates and keys ...
	I0115 09:45:29.146684   94931 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0115 09:45:29.146702   94931 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 09:45:29.146817   94931 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0115 09:45:29.146832   94931 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 09:45:29.320343   94931 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 09:45:29.320390   94931 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 09:45:29.457592   94931 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0115 09:45:29.457621   94931 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0115 09:45:29.703454   94931 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0115 09:45:29.703485   94931 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0115 09:45:30.075670   94931 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0115 09:45:30.075729   94931 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0115 09:45:30.187839   94931 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0115 09:45:30.187872   94931 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0115 09:45:30.188037   94931 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-218062] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0115 09:45:30.188069   94931 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-218062] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0115 09:45:30.550525   94931 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0115 09:45:30.550573   94931 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0115 09:45:30.550707   94931 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-218062] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0115 09:45:30.550722   94931 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-218062] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0115 09:45:30.666659   94931 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 09:45:30.666697   94931 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 09:45:30.864590   94931 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 09:45:30.864619   94931 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 09:45:31.245813   94931 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0115 09:45:31.245870   94931 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0115 09:45:31.245953   94931 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 09:45:31.245967   94931 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 09:45:31.513268   94931 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 09:45:31.513300   94931 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 09:45:31.582806   94931 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 09:45:31.582836   94931 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 09:45:31.671768   94931 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 09:45:31.671781   94931 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 09:45:31.965154   94931 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 09:45:31.965212   94931 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 09:45:31.965608   94931 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 09:45:31.965635   94931 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 09:45:31.967869   94931 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 09:45:31.967893   94931 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 09:45:31.970257   94931 out.go:204]   - Booting up control plane ...
	I0115 09:45:31.970390   94931 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 09:45:31.970407   94931 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 09:45:31.970480   94931 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 09:45:31.970489   94931 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 09:45:31.970571   94931 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 09:45:31.970583   94931 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 09:45:31.978351   94931 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 09:45:31.978385   94931 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 09:45:31.979053   94931 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 09:45:31.979068   94931 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 09:45:31.979097   94931 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0115 09:45:31.979120   94931 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0115 09:45:32.055911   94931 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 09:45:32.055950   94931 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 09:45:36.557933   94931 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.502115 seconds
	I0115 09:45:36.557962   94931 command_runner.go:130] > [apiclient] All control plane components are healthy after 4.502115 seconds
	I0115 09:45:36.558112   94931 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 09:45:36.558124   94931 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 09:45:36.569699   94931 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 09:45:36.569724   94931 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 09:45:37.089638   94931 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 09:45:37.089670   94931 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0115 09:45:37.089956   94931 kubeadm.go:322] [mark-control-plane] Marking the node multinode-218062 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0115 09:45:37.089971   94931 command_runner.go:130] > [mark-control-plane] Marking the node multinode-218062 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0115 09:45:37.598895   94931 kubeadm.go:322] [bootstrap-token] Using token: khibdb.i9aa6itnrcuhq9bl
	I0115 09:45:37.600444   94931 out.go:204]   - Configuring RBAC rules ...
	I0115 09:45:37.598945   94931 command_runner.go:130] > [bootstrap-token] Using token: khibdb.i9aa6itnrcuhq9bl
	I0115 09:45:37.600571   94931 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 09:45:37.600589   94931 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 09:45:37.604700   94931 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 09:45:37.604739   94931 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 09:45:37.612292   94931 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 09:45:37.612313   94931 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 09:45:37.615182   94931 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 09:45:37.615200   94931 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 09:45:37.618078   94931 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 09:45:37.618097   94931 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 09:45:37.620816   94931 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 09:45:37.620833   94931 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 09:45:37.631364   94931 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 09:45:37.631388   94931 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 09:45:37.851647   94931 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 09:45:37.851672   94931 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0115 09:45:38.032845   94931 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 09:45:38.032872   94931 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0115 09:45:38.034083   94931 kubeadm.go:322] 
	I0115 09:45:38.034170   94931 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 09:45:38.034185   94931 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0115 09:45:38.034194   94931 kubeadm.go:322] 
	I0115 09:45:38.034282   94931 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 09:45:38.034293   94931 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0115 09:45:38.034299   94931 kubeadm.go:322] 
	I0115 09:45:38.034330   94931 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 09:45:38.034337   94931 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0115 09:45:38.034401   94931 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 09:45:38.034409   94931 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 09:45:38.034474   94931 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 09:45:38.034481   94931 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 09:45:38.034487   94931 kubeadm.go:322] 
	I0115 09:45:38.034560   94931 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0115 09:45:38.034565   94931 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0115 09:45:38.034568   94931 kubeadm.go:322] 
	I0115 09:45:38.034606   94931 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0115 09:45:38.034609   94931 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0115 09:45:38.034612   94931 kubeadm.go:322] 
	I0115 09:45:38.034650   94931 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 09:45:38.034654   94931 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0115 09:45:38.034716   94931 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 09:45:38.034720   94931 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 09:45:38.034769   94931 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 09:45:38.034773   94931 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 09:45:38.034776   94931 kubeadm.go:322] 
	I0115 09:45:38.034855   94931 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 09:45:38.034860   94931 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0115 09:45:38.034921   94931 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 09:45:38.034924   94931 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0115 09:45:38.034927   94931 kubeadm.go:322] 
	I0115 09:45:38.034988   94931 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token khibdb.i9aa6itnrcuhq9bl \
	I0115 09:45:38.034992   94931 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token khibdb.i9aa6itnrcuhq9bl \
	I0115 09:45:38.035067   94931 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d7912295337f01ac2906deb500e7500df52d877bdb5cb26be73339deab38c6d2 \
	I0115 09:45:38.035070   94931 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d7912295337f01ac2906deb500e7500df52d877bdb5cb26be73339deab38c6d2 \
	I0115 09:45:38.035086   94931 kubeadm.go:322] 	--control-plane 
	I0115 09:45:38.035089   94931 command_runner.go:130] > 	--control-plane 
	I0115 09:45:38.035092   94931 kubeadm.go:322] 
	I0115 09:45:38.035153   94931 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 09:45:38.035157   94931 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0115 09:45:38.035160   94931 kubeadm.go:322] 
	I0115 09:45:38.035220   94931 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token khibdb.i9aa6itnrcuhq9bl \
	I0115 09:45:38.035224   94931 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token khibdb.i9aa6itnrcuhq9bl \
	I0115 09:45:38.035301   94931 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d7912295337f01ac2906deb500e7500df52d877bdb5cb26be73339deab38c6d2 
	I0115 09:45:38.035307   94931 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d7912295337f01ac2906deb500e7500df52d877bdb5cb26be73339deab38c6d2 
	I0115 09:45:38.037778   94931 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-gcp\n", err: exit status 1
	I0115 09:45:38.037796   94931 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-gcp\n", err: exit status 1
	I0115 09:45:38.037929   94931 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 09:45:38.037953   94931 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
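
The --discovery-token-ca-cert-hash printed in the join commands above is, per kubeadm's documentation, the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch that reproduces such a hash from a CA file (the path is an assumption; this is not minikube or kubeadm code):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Assumption: path to the cluster CA certificate on the control-plane node.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The discovery hash is the SHA-256 of the DER-encoded Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
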
	I0115 09:45:38.037970   94931 cni.go:84] Creating CNI manager for ""
	I0115 09:45:38.037980   94931 cni.go:136] 1 nodes found, recommending kindnet
	I0115 09:45:38.039899   94931 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0115 09:45:38.041343   94931 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 09:45:38.045126   94931 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0115 09:45:38.045154   94931 command_runner.go:130] >   Size: 4085020   	Blocks: 7984       IO Block: 4096   regular file
	I0115 09:45:38.045163   94931 command_runner.go:130] > Device: 37h/55d	Inode: 555949      Links: 1
	I0115 09:45:38.045172   94931 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 09:45:38.045181   94931 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0115 09:45:38.045190   94931 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0115 09:45:38.045202   94931 command_runner.go:130] > Change: 2024-01-15 09:26:53.774860876 +0000
	I0115 09:45:38.045211   94931 command_runner.go:130] >  Birth: 2024-01-15 09:26:53.750859235 +0000
	I0115 09:45:38.045271   94931 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0115 09:45:38.045285   94931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 09:45:38.063669   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0115 09:45:38.740241   94931 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0115 09:45:38.746967   94931 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0115 09:45:38.754936   94931 command_runner.go:130] > serviceaccount/kindnet created
	I0115 09:45:38.764254   94931 command_runner.go:130] > daemonset.apps/kindnet created
	I0115 09:45:38.768440   94931 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 09:45:38.768529   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:38.768541   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=multinode-218062 minikube.k8s.io/updated_at=2024_01_15T09_45_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:38.842613   94931 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0115 09:45:38.846197   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:38.846237   94931 command_runner.go:130] > -16
	I0115 09:45:38.846277   94931 ops.go:34] apiserver oom_adj: -16
	I0115 09:45:38.852967   94931 command_runner.go:130] > node/multinode-218062 labeled
	I0115 09:45:38.912803   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:39.346301   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:39.407465   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:39.846558   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:39.908510   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:40.346476   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:40.409993   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:40.846610   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:40.909256   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:41.346332   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:41.409227   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:41.846475   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:41.910651   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:42.347236   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:42.411508   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:42.847216   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:42.910995   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:43.346555   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:43.412450   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:43.847137   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:43.911346   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:44.346916   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:44.413582   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:44.847169   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:44.907941   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:45.347011   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:45.408007   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:45.846535   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:45.910069   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:46.346379   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:46.411622   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:46.847257   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:46.912228   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:47.347109   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:47.409224   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:47.846355   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:47.911760   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:48.346427   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:48.412502   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:48.846843   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:48.908848   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:49.346926   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:49.408034   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:49.846314   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:49.909019   94931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 09:45:50.346342   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:45:50.412383   94931 command_runner.go:130] > NAME      SECRETS   AGE
	I0115 09:45:50.412407   94931 command_runner.go:130] > default   0         0s
	I0115 09:45:50.412445   94931 kubeadm.go:1088] duration metric: took 11.643998072s to wait for elevateKubeSystemPrivileges.
	I0115 09:45:50.412469   94931 kubeadm.go:406] StartCluster complete in 21.661804702s
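
The block of retries above is minikube waiting for kube-controller-manager to create the "default" service account before elevating kube-system privileges; each kubectl get sa default returns NotFound until the account exists. A rough client-go equivalent of that wait loop (kubeconfig path taken from the kubectl invocations in the log; illustrative only, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: run where /var/lib/minikube/kubeconfig is readable (the node itself).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Retry every 500ms, for up to 2 minutes, until the "default" service account exists.
	err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err != nil {
			return false, nil // NotFound until the service account controller creates it
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("default service account is present")
}
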
	I0115 09:45:50.412499   94931 settings.go:142] acquiring lock: {Name:mkbf6aded3b549fa4f3ab1cad294a9ebed536616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:45:50.412583   94931 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:45:50.413272   94931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/kubeconfig: {Name:mk31241d29ab70870dc379ecd59996acb9413d82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:45:50.413505   94931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 09:45:50.413591   94931 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 09:45:50.413670   94931 addons.go:69] Setting storage-provisioner=true in profile "multinode-218062"
	I0115 09:45:50.413694   94931 addons.go:234] Setting addon storage-provisioner=true in "multinode-218062"
	I0115 09:45:50.413695   94931 addons.go:69] Setting default-storageclass=true in profile "multinode-218062"
	I0115 09:45:50.413724   94931 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-218062"
	I0115 09:45:50.413759   94931 config.go:182] Loaded profile config "multinode-218062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:45:50.413763   94931 host.go:66] Checking if "multinode-218062" exists ...
	I0115 09:45:50.413893   94931 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:45:50.414156   94931 cli_runner.go:164] Run: docker container inspect multinode-218062 --format={{.State.Status}}
	I0115 09:45:50.414143   94931 kapi.go:59] client config for multinode-218062: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/client.key", CAFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 09:45:50.414322   94931 cli_runner.go:164] Run: docker container inspect multinode-218062 --format={{.State.Status}}
	I0115 09:45:50.414868   94931 cert_rotation.go:137] Starting client certificate rotation controller
	I0115 09:45:50.415115   94931 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0115 09:45:50.415132   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:50.415139   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:50.415145   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:50.425620   94931 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0115 09:45:50.425641   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:50.425648   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:50.425653   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:50.425659   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:50.425664   94931 round_trippers.go:580]     Content-Length: 291
	I0115 09:45:50.425669   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:50 GMT
	I0115 09:45:50.425674   94931 round_trippers.go:580]     Audit-Id: 77138a61-ca73-4a50-9106-6c7ea9a36763
	I0115 09:45:50.425683   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:50.425709   94931 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"22ec091e-f06f-49a1-8fda-0f72e5d1c41b","resourceVersion":"268","creationTimestamp":"2024-01-15T09:45:37Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0115 09:45:50.426218   94931 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"22ec091e-f06f-49a1-8fda-0f72e5d1c41b","resourceVersion":"268","creationTimestamp":"2024-01-15T09:45:37Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0115 09:45:50.426272   94931 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0115 09:45:50.426280   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:50.426288   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:50.426297   94931 round_trippers.go:473]     Content-Type: application/json
	I0115 09:45:50.426308   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:50.432104   94931 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 09:45:50.432131   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:50.432142   94931 round_trippers.go:580]     Content-Length: 291
	I0115 09:45:50.432150   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:50 GMT
	I0115 09:45:50.432173   94931 round_trippers.go:580]     Audit-Id: 441b5762-833f-4e4b-8ba7-9ed599e71e62
	I0115 09:45:50.432185   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:50.432197   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:50.432210   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:50.432222   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:50.432269   94931 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"22ec091e-f06f-49a1-8fda-0f72e5d1c41b","resourceVersion":"343","creationTimestamp":"2024-01-15T09:45:37Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0115 09:45:50.435009   94931 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:45:50.437389   94931 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 09:45:50.435380   94931 kapi.go:59] client config for multinode-218062: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/client.key", CAFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 09:45:50.439092   94931 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 09:45:50.439117   94931 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 09:45:50.439186   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062
	I0115 09:45:50.439230   94931 addons.go:234] Setting addon default-storageclass=true in "multinode-218062"
	I0115 09:45:50.439271   94931 host.go:66] Checking if "multinode-218062" exists ...
	I0115 09:45:50.439747   94931 cli_runner.go:164] Run: docker container inspect multinode-218062 --format={{.State.Status}}
	I0115 09:45:50.458364   94931 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 09:45:50.458389   94931 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 09:45:50.458445   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062
	I0115 09:45:50.459126   94931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062/id_rsa Username:docker}
	I0115 09:45:50.477522   94931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062/id_rsa Username:docker}
	I0115 09:45:50.492736   94931 command_runner.go:130] > apiVersion: v1
	I0115 09:45:50.492760   94931 command_runner.go:130] > data:
	I0115 09:45:50.492765   94931 command_runner.go:130] >   Corefile: |
	I0115 09:45:50.492769   94931 command_runner.go:130] >     .:53 {
	I0115 09:45:50.492773   94931 command_runner.go:130] >         errors
	I0115 09:45:50.492777   94931 command_runner.go:130] >         health {
	I0115 09:45:50.492785   94931 command_runner.go:130] >            lameduck 5s
	I0115 09:45:50.492791   94931 command_runner.go:130] >         }
	I0115 09:45:50.492797   94931 command_runner.go:130] >         ready
	I0115 09:45:50.492806   94931 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0115 09:45:50.492814   94931 command_runner.go:130] >            pods insecure
	I0115 09:45:50.492827   94931 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0115 09:45:50.492839   94931 command_runner.go:130] >            ttl 30
	I0115 09:45:50.492847   94931 command_runner.go:130] >         }
	I0115 09:45:50.492854   94931 command_runner.go:130] >         prometheus :9153
	I0115 09:45:50.492865   94931 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0115 09:45:50.492878   94931 command_runner.go:130] >            max_concurrent 1000
	I0115 09:45:50.492887   94931 command_runner.go:130] >         }
	I0115 09:45:50.492894   94931 command_runner.go:130] >         cache 30
	I0115 09:45:50.492904   94931 command_runner.go:130] >         loop
	I0115 09:45:50.492911   94931 command_runner.go:130] >         reload
	I0115 09:45:50.492922   94931 command_runner.go:130] >         loadbalance
	I0115 09:45:50.492928   94931 command_runner.go:130] >     }
	I0115 09:45:50.492937   94931 command_runner.go:130] > kind: ConfigMap
	I0115 09:45:50.492942   94931 command_runner.go:130] > metadata:
	I0115 09:45:50.492949   94931 command_runner.go:130] >   creationTimestamp: "2024-01-15T09:45:37Z"
	I0115 09:45:50.492956   94931 command_runner.go:130] >   name: coredns
	I0115 09:45:50.492960   94931 command_runner.go:130] >   namespace: kube-system
	I0115 09:45:50.492967   94931 command_runner.go:130] >   resourceVersion: "264"
	I0115 09:45:50.492972   94931 command_runner.go:130] >   uid: 01c0b0b2-05ad-4456-932a-606c59a31bf1
	I0115 09:45:50.493188   94931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0115 09:45:50.568754   94931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 09:45:50.638129   94931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 09:45:50.915833   94931 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0115 09:45:50.915862   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:50.915870   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:50.915877   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:50.930657   94931 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0115 09:45:50.930692   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:50.930704   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:50.930712   94931 round_trippers.go:580]     Content-Length: 291
	I0115 09:45:50.930719   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:50 GMT
	I0115 09:45:50.930727   94931 round_trippers.go:580]     Audit-Id: 9b7a1342-4b62-431a-89fe-18a4c54c3b0f
	I0115 09:45:50.930734   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:50.930741   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:50.930749   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:50.930782   94931 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"22ec091e-f06f-49a1-8fda-0f72e5d1c41b","resourceVersion":"343","creationTimestamp":"2024-01-15T09:45:37Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0115 09:45:50.930895   94931 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-218062" context rescaled to 1 replicas
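
The GET and PUT against .../deployments/coredns/scale above lower CoreDNS from two replicas to one for this single-node profile. A small client-go sketch of the same rescale through the Scale subresource (illustrative; minikube issues the raw REST calls shown in the round_trippers lines):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the host-side kubeconfig the log points at.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17953-3696/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	// GET the current scale of the coredns deployment.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// PUT it back with spec.replicas lowered to 1.
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
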
	I0115 09:45:50.930925   94931 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 09:45:50.934392   94931 out.go:177] * Verifying Kubernetes components...
	I0115 09:45:50.936038   94931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:45:51.135404   94931 command_runner.go:130] > configmap/coredns replaced
	I0115 09:45:51.141132   94931 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
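
The sed pipeline a few lines earlier rewrites the coredns ConfigMap so a hosts{} stanza resolving host.minikube.internal to 192.168.58.1 sits ahead of the forward plugin. A client-go sketch of the same edit (kubeconfig path and the exact Corefile indentation are assumptions; not minikube's code):

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		panic("coredns ConfigMap has no data")
	}
	// Insert a hosts{} stanza ahead of the forward plugin, mirroring what the sed command does.
	hosts := "        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }\n        forward . /etc/resolv.conf"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward . /etc/resolv.conf", hosts, 1)
	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("host.minikube.internal record injected into the coredns ConfigMap")
}
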
	I0115 09:45:51.585620   94931 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0115 09:45:51.590463   94931 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0115 09:45:51.597283   94931 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0115 09:45:51.626520   94931 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0115 09:45:51.635122   94931 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0115 09:45:51.645473   94931 command_runner.go:130] > pod/storage-provisioner created
	I0115 09:45:51.650611   94931 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.081806199s)
	I0115 09:45:51.650680   94931 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0115 09:45:51.650699   94931 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.012483675s)
	I0115 09:45:51.650813   94931 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0115 09:45:51.650827   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:51.650838   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:51.650847   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:51.651294   94931 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:45:51.651559   94931 kapi.go:59] client config for multinode-218062: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/client.key", CAFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 09:45:51.651828   94931 node_ready.go:35] waiting up to 6m0s for node "multinode-218062" to be "Ready" ...
	I0115 09:45:51.651919   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:45:51.651928   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:51.651939   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:51.651953   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:51.652854   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:45:51.652872   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:51.652879   94931 round_trippers.go:580]     Audit-Id: 9d432b1e-4e84-4d27-86fe-42fc808bf132
	I0115 09:45:51.652885   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:51.652890   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:51.652906   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:51.652918   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:51.652931   94931 round_trippers.go:580]     Content-Length: 1273
	I0115 09:45:51.652942   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:51 GMT
	I0115 09:45:51.652984   94931 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"405"},"items":[{"metadata":{"name":"standard","uid":"273ffb2a-19de-4658-ba63-23ce85a4a0a2","resourceVersion":"391","creationTimestamp":"2024-01-15T09:45:51Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-15T09:45:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0115 09:45:51.653439   94931 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"273ffb2a-19de-4658-ba63-23ce85a4a0a2","resourceVersion":"391","creationTimestamp":"2024-01-15T09:45:51Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-15T09:45:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0115 09:45:51.653488   94931 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0115 09:45:51.653499   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:51.653510   94931 round_trippers.go:473]     Content-Type: application/json
	I0115 09:45:51.653519   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:51.653533   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:51.654070   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:45:51.654118   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:51.654133   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:51.654143   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:51 GMT
	I0115 09:45:51.654152   94931 round_trippers.go:580]     Audit-Id: fa7b2c90-99e6-4afd-acbe-6467665cb263
	I0115 09:45:51.654165   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:51.654177   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:51.654189   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:51.654359   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"338","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0115 09:45:51.656186   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:45:51.656209   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:51.656220   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:51.656230   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:51.656240   94931 round_trippers.go:580]     Content-Length: 1220
	I0115 09:45:51.656252   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:51 GMT
	I0115 09:45:51.656262   94931 round_trippers.go:580]     Audit-Id: c57f3d17-6698-44ad-a062-892038f1d4f5
	I0115 09:45:51.656274   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:51.656288   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:51.656317   94931 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"273ffb2a-19de-4658-ba63-23ce85a4a0a2","resourceVersion":"391","creationTimestamp":"2024-01-15T09:45:51Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-15T09:45:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0115 09:45:51.658377   94931 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0115 09:45:51.659886   94931 addons.go:505] enable addons completed in 1.24629198s: enabled=[storage-provisioner default-storageclass]
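
The storageclass GET/PUT above re-submits the "standard" StorageClass with storageclass.kubernetes.io/is-default-class set to "true", the annotation that marks it as the cluster default. A client-go sketch of ensuring that annotation (illustrative, not minikube's enableOrDisableStorageClasses logic):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17953-3696/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Set the default-class annotation and write the object back, as the PUT above does.
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("StorageClass standard marked as default")
}
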
	I0115 09:45:52.152961   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:45:52.152990   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:52.152997   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:52.153003   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:52.155329   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:45:52.155351   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:52.155357   94931 round_trippers.go:580]     Audit-Id: 6bc5a0be-0bdf-4581-8562-0b4ac7524a3c
	I0115 09:45:52.155364   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:52.155373   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:52.155383   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:52.155391   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:52.155406   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:52 GMT
	I0115 09:45:52.155530   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"338","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0115 09:45:52.652137   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:45:52.652161   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:52.652169   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:52.652175   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:52.654583   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:45:52.654609   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:52.654619   94931 round_trippers.go:580]     Audit-Id: e5b1669e-838f-4e04-a881-4bd8d6dbc146
	I0115 09:45:52.654628   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:52.654639   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:52.654647   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:52.654656   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:52.654669   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:52 GMT
	I0115 09:45:52.654810   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"338","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0115 09:45:53.152336   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:45:53.152408   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:53.152429   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:53.152450   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:53.156543   94931 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 09:45:53.156611   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:53.156634   94931 round_trippers.go:580]     Audit-Id: 886ab142-2833-4c7b-8242-fa484b0fbf29
	I0115 09:45:53.156652   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:53.156669   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:53.156685   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:53.156707   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:53.156725   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:53 GMT
	I0115 09:45:53.156872   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0115 09:45:53.157321   94931 node_ready.go:49] node "multinode-218062" has status "Ready":"True"
	I0115 09:45:53.157373   94931 node_ready.go:38] duration metric: took 1.505520019s waiting for node "multinode-218062" to be "Ready" ...
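
The node_ready loop above keeps re-fetching the node object until its Ready condition reports True. A compact client-go version of that poll (kubeconfig path from the log; a sketch rather than minikube's node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17953-3696/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Re-fetch the node every 500ms until its NodeReady condition is True, for up to 6 minutes,
	// matching the "waiting up to 6m0s for node ... to be Ready" loop in the log.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-218062", metav1.GetOptions{})
		if err != nil {
			return false, nil
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(`node "multinode-218062" is Ready`)
}
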
	I0115 09:45:53.157398   94931 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 09:45:53.157494   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0115 09:45:53.157532   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:53.157551   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:53.157567   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:53.160552   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:45:53.160620   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:53.160635   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:53.160648   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:53.160659   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:53.160674   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:53.160685   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:53 GMT
	I0115 09:45:53.160696   94931 round_trippers.go:580]     Audit-Id: ef69b28a-6df3-4ed2-86a8-f021b16ddce6
	I0115 09:45:53.161032   94931 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q8r7r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"08d15645-f87b-4962-ac37-afaa15661146","resourceVersion":"418","creationTimestamp":"2024-01-15T09:45:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81ef8bfc-3a80-4670-9014-012e9507c528","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81ef8bfc-3a80-4670-9014-012e9507c528\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I0115 09:45:53.163994   94931 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q8r7r" in "kube-system" namespace to be "Ready" ...
	I0115 09:45:53.164110   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q8r7r
	I0115 09:45:53.164125   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:53.164139   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:53.164156   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:53.166652   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:45:53.166669   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:53.166676   94931 round_trippers.go:580]     Audit-Id: d30921c5-f854-4b42-b2e1-9dcc26c126bb
	I0115 09:45:53.166683   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:53.166691   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:53.166699   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:53.166707   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:53.166720   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:53 GMT
	I0115 09:45:53.166894   94931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q8r7r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"08d15645-f87b-4962-ac37-afaa15661146","resourceVersion":"418","creationTimestamp":"2024-01-15T09:45:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81ef8bfc-3a80-4670-9014-012e9507c528","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81ef8bfc-3a80-4670-9014-012e9507c528\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0115 09:45:53.167364   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:45:53.167386   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:53.167393   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:53.167400   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:53.169214   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:45:53.169235   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:53.169248   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:53.169260   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:53 GMT
	I0115 09:45:53.169272   94931 round_trippers.go:580]     Audit-Id: 2e130d4c-22c8-4ad7-8d82-8220e939bde8
	I0115 09:45:53.169282   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:53.169292   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:53.169303   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:53.169404   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0115 09:45:53.665030   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q8r7r
	I0115 09:45:53.665053   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:53.665061   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:53.665067   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:53.667351   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:45:53.667373   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:53.667380   94931 round_trippers.go:580]     Audit-Id: 757ad7a9-5273-4e33-9446-2528ba715766
	I0115 09:45:53.667385   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:53.667390   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:53.667398   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:53.667404   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:53.667409   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:53 GMT
	I0115 09:45:53.667524   94931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q8r7r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"08d15645-f87b-4962-ac37-afaa15661146","resourceVersion":"418","creationTimestamp":"2024-01-15T09:45:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81ef8bfc-3a80-4670-9014-012e9507c528","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81ef8bfc-3a80-4670-9014-012e9507c528\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0115 09:45:53.667977   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:45:53.667991   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:53.667998   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:53.668004   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:53.669874   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:45:53.669890   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:53.669900   94931 round_trippers.go:580]     Audit-Id: b1c1618e-4169-4fc0-b627-5e7d8be731d1
	I0115 09:45:53.669909   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:53.669919   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:53.669928   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:53.669940   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:53.669949   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:53 GMT
	I0115 09:45:53.670089   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0115 09:45:54.164637   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q8r7r
	I0115 09:45:54.164661   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:54.164669   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:54.164675   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:54.167019   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:45:54.167038   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:54.167045   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:54.167050   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:54.167055   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:54 GMT
	I0115 09:45:54.167060   94931 round_trippers.go:580]     Audit-Id: 05bc6d2b-296d-4503-8c65-b541a5d04b3b
	I0115 09:45:54.167066   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:54.167071   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:54.167256   94931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q8r7r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"08d15645-f87b-4962-ac37-afaa15661146","resourceVersion":"428","creationTimestamp":"2024-01-15T09:45:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81ef8bfc-3a80-4670-9014-012e9507c528","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81ef8bfc-3a80-4670-9014-012e9507c528\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0115 09:45:54.167701   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:45:54.167713   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:54.167720   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:54.167726   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:54.169545   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:45:54.169569   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:54.169578   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:54.169584   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:54 GMT
	I0115 09:45:54.169590   94931 round_trippers.go:580]     Audit-Id: 9c42427f-e939-4615-a8a7-4b20cf60d85d
	I0115 09:45:54.169595   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:54.169603   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:54.169608   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:54.169768   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0115 09:45:54.170131   94931 pod_ready.go:92] pod "coredns-5dd5756b68-q8r7r" in "kube-system" namespace has status "Ready":"True"
	I0115 09:45:54.170147   94931 pod_ready.go:81] duration metric: took 1.006099996s waiting for pod "coredns-5dd5756b68-q8r7r" in "kube-system" namespace to be "Ready" ...
	I0115 09:45:54.170157   94931 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-218062" in "kube-system" namespace to be "Ready" ...
	I0115 09:45:54.170214   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-218062
	I0115 09:45:54.170223   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:54.170230   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:54.170236   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:54.172097   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:45:54.172114   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:54.172123   94931 round_trippers.go:580]     Audit-Id: ab3c7bb1-b9d4-4d76-9ae7-7b4f446ee076
	I0115 09:45:54.172133   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:54.172141   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:54.172149   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:54.172161   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:54.172174   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:54 GMT
	I0115 09:45:54.172271   94931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-218062","namespace":"kube-system","uid":"c2e637f2-99f6-4803-be29-1bf3bc7b1c47","resourceVersion":"316","creationTimestamp":"2024-01-15T09:45:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"30e59e7ab1ab931a77b3e9e53c2d0ba9","kubernetes.io/config.mirror":"30e59e7ab1ab931a77b3e9e53c2d0ba9","kubernetes.io/config.seen":"2024-01-15T09:45:37.928676186Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0115 09:45:54.172669   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:45:54.172683   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:54.172690   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:54.172697   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:54.174285   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:45:54.174301   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:54.174307   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:54.174312   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:54.174317   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:54 GMT
	I0115 09:45:54.174323   94931 round_trippers.go:580]     Audit-Id: aa661861-c8d7-428a-8bdc-798b81f2e585
	I0115 09:45:54.174328   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:54.174338   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:54.174464   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0115 09:45:54.174842   94931 pod_ready.go:92] pod "etcd-multinode-218062" in "kube-system" namespace has status "Ready":"True"
	I0115 09:45:54.174862   94931 pod_ready.go:81] duration metric: took 4.694588ms waiting for pod "etcd-multinode-218062" in "kube-system" namespace to be "Ready" ...
	I0115 09:45:54.174877   94931 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-218062" in "kube-system" namespace to be "Ready" ...
	I0115 09:45:54.174937   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-218062
	I0115 09:45:54.174948   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:54.174959   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:54.174971   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:54.176606   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:45:54.176623   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:54.176629   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:54.176634   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:54.176640   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:54.176645   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:54 GMT
	I0115 09:45:54.176650   94931 round_trippers.go:580]     Audit-Id: d2a3007e-5bb5-4714-acdc-c148c4c7d80e
	I0115 09:45:54.176655   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:54.176795   94931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-218062","namespace":"kube-system","uid":"612565a1-03c7-4efa-a8d5-e70019357d3b","resourceVersion":"322","creationTimestamp":"2024-01-15T09:45:38Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"867601f8a47792c2c7318b76d01280c1","kubernetes.io/config.mirror":"867601f8a47792c2c7318b76d01280c1","kubernetes.io/config.seen":"2024-01-15T09:45:37.928667412Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0115 09:45:54.177256   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:45:54.177270   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:54.177276   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:54.177286   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:54.178896   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:45:54.178912   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:54.178918   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:54.178923   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:54 GMT
	I0115 09:45:54.178928   94931 round_trippers.go:580]     Audit-Id: 2d9bdcd8-4de7-4903-9c2a-cf8c8157a1a3
	I0115 09:45:54.178934   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:54.178942   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:54.178947   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:54.179070   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0115 09:45:54.179361   94931 pod_ready.go:92] pod "kube-apiserver-multinode-218062" in "kube-system" namespace has status "Ready":"True"
	I0115 09:45:54.179375   94931 pod_ready.go:81] duration metric: took 4.488379ms waiting for pod "kube-apiserver-multinode-218062" in "kube-system" namespace to be "Ready" ...
	I0115 09:45:54.179384   94931 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-218062" in "kube-system" namespace to be "Ready" ...
	I0115 09:45:54.179451   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-218062
	I0115 09:45:54.179462   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:54.179477   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:54.179490   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:54.181050   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:45:54.181068   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:54.181077   94931 round_trippers.go:580]     Audit-Id: 124e3756-55a5-4286-95f6-a0ce5e8e7493
	I0115 09:45:54.181085   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:54.181091   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:54.181121   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:54.181131   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:54.181146   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:54 GMT
	I0115 09:45:54.181247   94931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-218062","namespace":"kube-system","uid":"cf87fc09-c319-419d-9411-5d12e72566dc","resourceVersion":"312","creationTimestamp":"2024-01-15T09:45:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9b5db148a2783c339cfc32ec0cce5f01","kubernetes.io/config.mirror":"9b5db148a2783c339cfc32ec0cce5f01","kubernetes.io/config.seen":"2024-01-15T09:45:37.928673385Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0115 09:45:54.181593   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:45:54.181604   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:54.181611   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:54.181617   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:54.183320   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:45:54.183338   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:54.183348   94931 round_trippers.go:580]     Audit-Id: 570b49fe-bffd-4f5f-a7c2-976812d242c5
	I0115 09:45:54.183357   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:54.183365   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:54.183373   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:54.183381   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:54.183390   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:54 GMT
	I0115 09:45:54.183502   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0115 09:45:54.183782   94931 pod_ready.go:92] pod "kube-controller-manager-multinode-218062" in "kube-system" namespace has status "Ready":"True"
	I0115 09:45:54.183798   94931 pod_ready.go:81] duration metric: took 4.401181ms waiting for pod "kube-controller-manager-multinode-218062" in "kube-system" namespace to be "Ready" ...
	I0115 09:45:54.183811   94931 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c5s76" in "kube-system" namespace to be "Ready" ...
	I0115 09:45:54.183867   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c5s76
	I0115 09:45:54.183877   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:54.183886   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:54.183897   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:54.185572   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:45:54.185593   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:54.185617   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:54.185629   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:54 GMT
	I0115 09:45:54.185642   94931 round_trippers.go:580]     Audit-Id: 6058906b-96f3-4e11-895d-c374c3fa58e4
	I0115 09:45:54.185649   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:54.185664   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:54.185670   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:54.185765   94931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c5s76","generateName":"kube-proxy-","namespace":"kube-system","uid":"d48e516d-6a91-4892-848f-b6318fb21880","resourceVersion":"408","creationTimestamp":"2024-01-15T09:45:50Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"42c9091f-b236-4566-a092-2569351741c0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42c9091f-b236-4566-a092-2569351741c0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0115 09:45:54.352386   94931 request.go:629] Waited for 166.264175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:45:54.352478   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:45:54.352487   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:54.352499   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:54.352511   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:54.354845   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:45:54.354865   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:54.354872   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:54.354877   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:54 GMT
	I0115 09:45:54.354882   94931 round_trippers.go:580]     Audit-Id: 2e0d6036-5e83-4568-ad58-2ec1f766612d
	I0115 09:45:54.354888   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:54.354893   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:54.354899   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:54.355051   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0115 09:45:54.355368   94931 pod_ready.go:92] pod "kube-proxy-c5s76" in "kube-system" namespace has status "Ready":"True"
	I0115 09:45:54.355385   94931 pod_ready.go:81] duration metric: took 171.567285ms waiting for pod "kube-proxy-c5s76" in "kube-system" namespace to be "Ready" ...
	I0115 09:45:54.355395   94931 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-218062" in "kube-system" namespace to be "Ready" ...
	I0115 09:45:54.552837   94931 request.go:629] Waited for 197.376677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-218062
	I0115 09:45:54.552910   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-218062
	I0115 09:45:54.552922   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:54.552930   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:54.552939   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:54.555261   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:45:54.555286   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:54.555294   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:54.555299   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:54.555305   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:54.555312   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:54 GMT
	I0115 09:45:54.555317   94931 round_trippers.go:580]     Audit-Id: 378daeed-50c3-4c05-8f98-d878cc52fe82
	I0115 09:45:54.555322   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:54.555447   94931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-218062","namespace":"kube-system","uid":"9673d427-a1d0-4df8-bb2a-16d180ba0873","resourceVersion":"320","creationTimestamp":"2024-01-15T09:45:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4fdbcfe7d9399dde072e519487ea43b9","kubernetes.io/config.mirror":"4fdbcfe7d9399dde072e519487ea43b9","kubernetes.io/config.seen":"2024-01-15T09:45:37.928674722Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0115 09:45:54.753200   94931 request.go:629] Waited for 197.385946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:45:54.753272   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:45:54.753279   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:54.753288   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:54.753296   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:54.755620   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:45:54.755644   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:54.755655   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:54.755665   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:54.755674   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:54.755683   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:54 GMT
	I0115 09:45:54.755692   94931 round_trippers.go:580]     Audit-Id: f1abad3a-3efe-4478-8bad-468317eea165
	I0115 09:45:54.755705   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:54.755815   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0115 09:45:54.756141   94931 pod_ready.go:92] pod "kube-scheduler-multinode-218062" in "kube-system" namespace has status "Ready":"True"
	I0115 09:45:54.756158   94931 pod_ready.go:81] duration metric: took 400.757271ms waiting for pod "kube-scheduler-multinode-218062" in "kube-system" namespace to be "Ready" ...
	I0115 09:45:54.756168   94931 pod_ready.go:38] duration metric: took 1.598747475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
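	(Aside: the pod_ready.go lines above poll each control-plane pod until its Ready condition is True. The following is a minimal, illustrative sketch of that kind of check using client-go; the kubeconfig path, pod name, and 500ms poll interval are placeholders inferred from the log, not minikube's actual implementation.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True,
// mirroring the `has status "Ready":"True"` lines in the log above.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path and pod name; adjust for a real cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll roughly every half second, matching the spacing of the GETs above.
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-5dd5756b68-q8r7r", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```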
	I0115 09:45:54.756185   94931 api_server.go:52] waiting for apiserver process to appear ...
	I0115 09:45:54.756236   94931 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 09:45:54.767270   94931 command_runner.go:130] > 1434
	I0115 09:45:54.767314   94931 api_server.go:72] duration metric: took 3.836357502s to wait for apiserver process to appear ...
	I0115 09:45:54.767326   94931 api_server.go:88] waiting for apiserver healthz status ...
	I0115 09:45:54.767347   94931 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0115 09:45:54.772276   94931 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0115 09:45:54.772356   94931 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0115 09:45:54.772367   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:54.772378   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:54.772392   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:54.773278   94931 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0115 09:45:54.773299   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:54.773310   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:54.773319   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:54.773331   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:54.773342   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:54.773349   94931 round_trippers.go:580]     Content-Length: 264
	I0115 09:45:54.773354   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:54 GMT
	I0115 09:45:54.773360   94931 round_trippers.go:580]     Audit-Id: d43470c0-15d1-4805-ab74-f200318969b5
	I0115 09:45:54.773382   94931 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0115 09:45:54.773490   94931 api_server.go:141] control plane version: v1.28.4
	I0115 09:45:54.773512   94931 api_server.go:131] duration metric: took 6.179666ms to wait for apiserver health ...
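	(Aside: the api_server.go lines above probe /healthz for the literal body "ok" and then read /version to learn the control-plane version. A small client-go sketch of both probes follows; the kubeconfig path is a placeholder and the code is illustrative only, assuming default RBAC that allows the client to read those endpoints.)

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Health probe: expects the literal body "ok", as logged above.
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// Version probe: corresponds to the GET /version response body above.
	info, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", info.GitVersion)
}
```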
	I0115 09:45:54.773525   94931 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 09:45:54.952949   94931 request.go:629] Waited for 179.340071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0115 09:45:54.953015   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0115 09:45:54.953021   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:54.953029   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:54.953036   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:54.956720   94931 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:45:54.956745   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:54.956756   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:54.956765   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:54.956775   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:54.956785   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:54 GMT
	I0115 09:45:54.956800   94931 round_trippers.go:580]     Audit-Id: 6478738f-e5ec-46da-b80d-7c3387d707bb
	I0115 09:45:54.956812   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:54.957362   94931 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q8r7r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"08d15645-f87b-4962-ac37-afaa15661146","resourceVersion":"428","creationTimestamp":"2024-01-15T09:45:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81ef8bfc-3a80-4670-9014-012e9507c528","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81ef8bfc-3a80-4670-9014-012e9507c528\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0115 09:45:54.959080   94931 system_pods.go:59] 8 kube-system pods found
	I0115 09:45:54.959105   94931 system_pods.go:61] "coredns-5dd5756b68-q8r7r" [08d15645-f87b-4962-ac37-afaa15661146] Running
	I0115 09:45:54.959112   94931 system_pods.go:61] "etcd-multinode-218062" [c2e637f2-99f6-4803-be29-1bf3bc7b1c47] Running
	I0115 09:45:54.959120   94931 system_pods.go:61] "kindnet-692j9" [83db8ca8-afaf-43c4-a6fe-23c3e1c596d2] Running
	I0115 09:45:54.959127   94931 system_pods.go:61] "kube-apiserver-multinode-218062" [612565a1-03c7-4efa-a8d5-e70019357d3b] Running
	I0115 09:45:54.959139   94931 system_pods.go:61] "kube-controller-manager-multinode-218062" [cf87fc09-c319-419d-9411-5d12e72566dc] Running
	I0115 09:45:54.959146   94931 system_pods.go:61] "kube-proxy-c5s76" [d48e516d-6a91-4892-848f-b6318fb21880] Running
	I0115 09:45:54.959160   94931 system_pods.go:61] "kube-scheduler-multinode-218062" [9673d427-a1d0-4df8-bb2a-16d180ba0873] Running
	I0115 09:45:54.959167   94931 system_pods.go:61] "storage-provisioner" [dc0462ba-e08f-4c5d-8502-0e201cfb2cd2] Running
	I0115 09:45:54.959176   94931 system_pods.go:74] duration metric: took 185.642166ms to wait for pod list to return data ...
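	(Aside: the repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter, not from the API server. The sketch below shows where that limiter is configured on a rest.Config; the QPS/Burst values are illustrative assumptions, not the values minikube uses.)

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}

	// client-go throttles requests locally; when the limiter delays a call it
	// emits the "client-side throttling" wait messages seen in the log.
	// Raising QPS/Burst loosens that limiter (values here are illustrative).
	cfg.QPS = 50
	cfg.Burst = 100

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client configured with QPS=%v Burst=%v: %T\n", cfg.QPS, cfg.Burst, client)
}
```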
	I0115 09:45:54.959190   94931 default_sa.go:34] waiting for default service account to be created ...
	I0115 09:45:55.152577   94931 request.go:629] Waited for 193.289665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0115 09:45:55.152638   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0115 09:45:55.152643   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:55.152655   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:55.152666   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:55.155023   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:45:55.155041   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:55.155048   94931 round_trippers.go:580]     Audit-Id: 647ccdaf-5965-4816-9fbf-b8e2a8156353
	I0115 09:45:55.155053   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:55.155059   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:55.155066   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:55.155091   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:55.155104   94931 round_trippers.go:580]     Content-Length: 261
	I0115 09:45:55.155113   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:55 GMT
	I0115 09:45:55.155139   94931 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"3f7a8c44-dce5-478a-8c6d-1ff57e1855c6","resourceVersion":"339","creationTimestamp":"2024-01-15T09:45:50Z"}}]}
	I0115 09:45:55.155394   94931 default_sa.go:45] found service account: "default"
	I0115 09:45:55.155416   94931 default_sa.go:55] duration metric: took 196.21918ms for default service account to be created ...
	I0115 09:45:55.155427   94931 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 09:45:55.352459   94931 request.go:629] Waited for 196.94427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0115 09:45:55.352536   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0115 09:45:55.352543   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:55.352553   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:55.352563   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:55.355789   94931 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:45:55.355808   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:55.355814   94931 round_trippers.go:580]     Audit-Id: 4cb4bd5b-56d9-439a-9bd6-8b3a3f2df79b
	I0115 09:45:55.355819   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:55.355824   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:55.355830   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:55.355835   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:55.355842   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:55 GMT
	I0115 09:45:55.356342   94931 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q8r7r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"08d15645-f87b-4962-ac37-afaa15661146","resourceVersion":"428","creationTimestamp":"2024-01-15T09:45:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81ef8bfc-3a80-4670-9014-012e9507c528","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81ef8bfc-3a80-4670-9014-012e9507c528\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0115 09:45:55.358783   94931 system_pods.go:86] 8 kube-system pods found
	I0115 09:45:55.358811   94931 system_pods.go:89] "coredns-5dd5756b68-q8r7r" [08d15645-f87b-4962-ac37-afaa15661146] Running
	I0115 09:45:55.358822   94931 system_pods.go:89] "etcd-multinode-218062" [c2e637f2-99f6-4803-be29-1bf3bc7b1c47] Running
	I0115 09:45:55.358835   94931 system_pods.go:89] "kindnet-692j9" [83db8ca8-afaf-43c4-a6fe-23c3e1c596d2] Running
	I0115 09:45:55.358843   94931 system_pods.go:89] "kube-apiserver-multinode-218062" [612565a1-03c7-4efa-a8d5-e70019357d3b] Running
	I0115 09:45:55.358856   94931 system_pods.go:89] "kube-controller-manager-multinode-218062" [cf87fc09-c319-419d-9411-5d12e72566dc] Running
	I0115 09:45:55.358868   94931 system_pods.go:89] "kube-proxy-c5s76" [d48e516d-6a91-4892-848f-b6318fb21880] Running
	I0115 09:45:55.358879   94931 system_pods.go:89] "kube-scheduler-multinode-218062" [9673d427-a1d0-4df8-bb2a-16d180ba0873] Running
	I0115 09:45:55.358888   94931 system_pods.go:89] "storage-provisioner" [dc0462ba-e08f-4c5d-8502-0e201cfb2cd2] Running
	I0115 09:45:55.358903   94931 system_pods.go:126] duration metric: took 203.464882ms to wait for k8s-apps to be running ...
	I0115 09:45:55.358918   94931 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 09:45:55.358977   94931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:45:55.370277   94931 system_svc.go:56] duration metric: took 11.353073ms WaitForService to wait for kubelet.
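	(Aside: the system_svc.go step above runs `systemctl is-active --quiet ... kubelet` over SSH inside the node and treats a zero exit code as "active". A minimal local sketch of that exit-code contract using os/exec follows; it runs on the host rather than over SSH and is purely illustrative.)

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 when the unit is active.
	// minikube issues the equivalent command inside the node via SSH.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}
```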
	I0115 09:45:55.370296   94931 kubeadm.go:581] duration metric: took 4.439343138s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 09:45:55.370313   94931 node_conditions.go:102] verifying NodePressure condition ...
	I0115 09:45:55.552719   94931 request.go:629] Waited for 182.326826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0115 09:45:55.552791   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0115 09:45:55.552796   94931 round_trippers.go:469] Request Headers:
	I0115 09:45:55.552804   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:45:55.552810   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:45:55.555065   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:45:55.555085   94931 round_trippers.go:577] Response Headers:
	I0115 09:45:55.555092   94931 round_trippers.go:580]     Audit-Id: b4d87453-3380-46b1-9380-2de626a24ea7
	I0115 09:45:55.555106   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:45:55.555115   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:45:55.555125   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:45:55.555137   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:45:55.555146   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:45:55 GMT
	I0115 09:45:55.555279   94931 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0115 09:45:55.555768   94931 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0115 09:45:55.555792   94931 node_conditions.go:123] node cpu capacity is 8
	I0115 09:45:55.555805   94931 node_conditions.go:105] duration metric: took 185.485208ms to run NodePressure ...
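	(Aside: the node_conditions.go lines above read the node list, check the pressure conditions, and report ephemeral-storage and CPU capacity. The sketch below performs the same kind of inspection with client-go; the kubeconfig path is a placeholder and this is not minikube's actual code.)

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// Pressure conditions should report False on a healthy node.
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure {
				fmt.Printf("%s %s=%s\n", node.Name, cond.Type, cond.Status)
			}
		}
		// Capacity figures corresponding to the "ephemeral capacity" and
		// "cpu capacity" lines in the log above.
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s ephemeral-storage=%s cpu=%s\n", node.Name, storage.String(), cpu.String())
	}
}
```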
	I0115 09:45:55.555821   94931 start.go:228] waiting for startup goroutines ...
	I0115 09:45:55.555831   94931 start.go:233] waiting for cluster config update ...
	I0115 09:45:55.555848   94931 start.go:242] writing updated cluster config ...
	I0115 09:45:55.558115   94931 out.go:177] 
	I0115 09:45:55.559521   94931 config.go:182] Loaded profile config "multinode-218062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:45:55.559587   94931 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/config.json ...
	I0115 09:45:55.561330   94931 out.go:177] * Starting worker node multinode-218062-m02 in cluster multinode-218062
	I0115 09:45:55.563094   94931 cache.go:121] Beginning downloading kic base image for docker with crio
	I0115 09:45:55.564437   94931 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 09:45:55.565675   94931 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:45:55.565695   94931 cache.go:56] Caching tarball of preloaded images
	I0115 09:45:55.565763   94931 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 09:45:55.565773   94931 preload.go:174] Found /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0115 09:45:55.565870   94931 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 09:45:55.565941   94931 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/config.json ...
	I0115 09:45:55.582238   94931 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0115 09:45:55.582265   94931 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0115 09:45:55.582283   94931 cache.go:194] Successfully downloaded all kic artifacts
	I0115 09:45:55.582315   94931 start.go:365] acquiring machines lock for multinode-218062-m02: {Name:mkb78a30f8b19b825228edf136840ffce2d95c61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 09:45:55.582410   94931 start.go:369] acquired machines lock for "multinode-218062-m02" in 78.217µs
	I0115 09:45:55.582432   94931 start.go:93] Provisioning new machine with config: &{Name:multinode-218062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-218062 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0115 09:45:55.582504   94931 start.go:125] createHost starting for "m02" (driver="docker")
	I0115 09:45:55.584724   94931 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0115 09:45:55.584813   94931 start.go:159] libmachine.API.Create for "multinode-218062" (driver="docker")
	I0115 09:45:55.584832   94931 client.go:168] LocalClient.Create starting
	I0115 09:45:55.584899   94931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem
	I0115 09:45:55.584928   94931 main.go:141] libmachine: Decoding PEM data...
	I0115 09:45:55.584945   94931 main.go:141] libmachine: Parsing certificate...
	I0115 09:45:55.584994   94931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem
	I0115 09:45:55.585012   94931 main.go:141] libmachine: Decoding PEM data...
	I0115 09:45:55.585025   94931 main.go:141] libmachine: Parsing certificate...
	I0115 09:45:55.585219   94931 cli_runner.go:164] Run: docker network inspect multinode-218062 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 09:45:55.600928   94931 network_create.go:77] Found existing network {name:multinode-218062 subnet:0xc002744f30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0115 09:45:55.600976   94931 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-218062-m02" container
	I0115 09:45:55.601033   94931 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 09:45:55.617319   94931 cli_runner.go:164] Run: docker volume create multinode-218062-m02 --label name.minikube.sigs.k8s.io=multinode-218062-m02 --label created_by.minikube.sigs.k8s.io=true
	I0115 09:45:55.634156   94931 oci.go:103] Successfully created a docker volume multinode-218062-m02
	I0115 09:45:55.634236   94931 cli_runner.go:164] Run: docker run --rm --name multinode-218062-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-218062-m02 --entrypoint /usr/bin/test -v multinode-218062-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 09:45:56.142010   94931 oci.go:107] Successfully prepared a docker volume multinode-218062-m02
	I0115 09:45:56.142043   94931 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 09:45:56.142063   94931 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 09:45:56.142112   94931 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-218062-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 09:46:01.185748   94931 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-218062-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.043591271s)
	I0115 09:46:01.185779   94931 kic.go:203] duration metric: took 5.043715 seconds to extract preloaded images to volume
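If the preload extraction ever needs to be double-checked, the node volume can be listed with a throwaway container (a sketch; it reuses the kicbase image from the commands above, but any image that ships /bin/ls would do):

    # List what the preload tarball left in the node volume
    docker run --rm -v multinode-218062-m02:/var \
      --entrypoint /bin/ls \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866 /var/lib/containers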
	W0115 09:46:01.185905   94931 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0115 09:46:01.185993   94931 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0115 09:46:01.240557   94931 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-218062-m02 --name multinode-218062-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-218062-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-218062-m02 --network multinode-218062 --ip 192.168.58.3 --volume multinode-218062-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
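The long docker run above is what actually creates the m02 node. The two properties that matter for the rest of the provisioning, the static IP on the cluster network and the host port published for SSH, can be read back afterwards (a sketch using the container and network names from the command above):

    # Static IP assigned on the multinode-218062 network (expected: 192.168.58.3)
    docker inspect -f '{{ (index .NetworkSettings.Networks "multinode-218062").IPAddress }}' multinode-218062-m02
    # Host port that 22/tcp was published to (used by the SSH provisioning below)
    docker port multinode-218062-m02 22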
	I0115 09:46:01.524681   94931 cli_runner.go:164] Run: docker container inspect multinode-218062-m02 --format={{.State.Running}}
	I0115 09:46:01.542059   94931 cli_runner.go:164] Run: docker container inspect multinode-218062-m02 --format={{.State.Status}}
	I0115 09:46:01.559260   94931 cli_runner.go:164] Run: docker exec multinode-218062-m02 stat /var/lib/dpkg/alternatives/iptables
	I0115 09:46:01.597786   94931 oci.go:144] the created container "multinode-218062-m02" has a running status.
	I0115 09:46:01.597824   94931 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062-m02/id_rsa...
	I0115 09:46:01.657387   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0115 09:46:01.657429   94931 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0115 09:46:01.676996   94931 cli_runner.go:164] Run: docker container inspect multinode-218062-m02 --format={{.State.Status}}
	I0115 09:46:01.694475   94931 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0115 09:46:01.694497   94931 kic_runner.go:114] Args: [docker exec --privileged multinode-218062-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0115 09:46:01.733499   94931 cli_runner.go:164] Run: docker container inspect multinode-218062-m02 --format={{.State.Status}}
	I0115 09:46:01.753066   94931 machine.go:88] provisioning docker machine ...
	I0115 09:46:01.753135   94931 ubuntu.go:169] provisioning hostname "multinode-218062-m02"
	I0115 09:46:01.753213   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062-m02
	I0115 09:46:01.776168   94931 main.go:141] libmachine: Using SSH client type: native
	I0115 09:46:01.776505   94931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0115 09:46:01.776519   94931 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-218062-m02 && echo "multinode-218062-m02" | sudo tee /etc/hostname
	I0115 09:46:01.777212   94931 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55286->127.0.0.1:32852: read: connection reset by peer
	I0115 09:46:04.919434   94931 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-218062-m02
	
	I0115 09:46:04.919508   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062-m02
	I0115 09:46:04.937661   94931 main.go:141] libmachine: Using SSH client type: native
	I0115 09:46:04.937985   94931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0115 09:46:04.938002   94931 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-218062-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-218062-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-218062-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 09:46:05.073124   94931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
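A quick sanity check that the hostname and /etc/hosts provisioning above took effect (a sketch; it only re-reads what the SSH commands just wrote):

    docker exec multinode-218062-m02 hostname                      # expect: multinode-218062-m02
    docker exec multinode-218062-m02 grep 127.0.1.1 /etc/hosts     # expect: 127.0.1.1 multinode-218062-m02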
	I0115 09:46:05.073157   94931 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17953-3696/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-3696/.minikube}
	I0115 09:46:05.073176   94931 ubuntu.go:177] setting up certificates
	I0115 09:46:05.073188   94931 provision.go:83] configureAuth start
	I0115 09:46:05.073239   94931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-218062-m02
	I0115 09:46:05.089671   94931 provision.go:138] copyHostCerts
	I0115 09:46:05.089706   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17953-3696/.minikube/ca.pem
	I0115 09:46:05.089733   94931 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-3696/.minikube/ca.pem, removing ...
	I0115 09:46:05.089743   94931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.pem
	I0115 09:46:05.089805   94931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-3696/.minikube/ca.pem (1082 bytes)
	I0115 09:46:05.089877   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17953-3696/.minikube/cert.pem
	I0115 09:46:05.089894   94931 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-3696/.minikube/cert.pem, removing ...
	I0115 09:46:05.089898   94931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-3696/.minikube/cert.pem
	I0115 09:46:05.089921   94931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-3696/.minikube/cert.pem (1123 bytes)
	I0115 09:46:05.089961   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17953-3696/.minikube/key.pem
	I0115 09:46:05.089976   94931 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-3696/.minikube/key.pem, removing ...
	I0115 09:46:05.089982   94931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-3696/.minikube/key.pem
	I0115 09:46:05.090002   94931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-3696/.minikube/key.pem (1679 bytes)
	I0115 09:46:05.090056   94931 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca-key.pem org=jenkins.multinode-218062-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-218062-m02]
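The SANs that end up in the generated server certificate can be confirmed with openssl (a sketch; the path is the one printed in the line above):

    # Show the Subject Alternative Names of the freshly generated server cert
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'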
	I0115 09:46:05.376137   94931 provision.go:172] copyRemoteCerts
	I0115 09:46:05.376204   94931 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 09:46:05.376251   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062-m02
	I0115 09:46:05.393231   94931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062-m02/id_rsa Username:docker}
	I0115 09:46:05.489647   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 09:46:05.489721   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0115 09:46:05.512876   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 09:46:05.512949   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0115 09:46:05.536002   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 09:46:05.536062   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 09:46:05.557913   94931 provision.go:86] duration metric: configureAuth took 484.713317ms
	I0115 09:46:05.557940   94931 ubuntu.go:193] setting minikube options for container-runtime
	I0115 09:46:05.558112   94931 config.go:182] Loaded profile config "multinode-218062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:46:05.558217   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062-m02
	I0115 09:46:05.574837   94931 main.go:141] libmachine: Using SSH client type: native
	I0115 09:46:05.575153   94931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0115 09:46:05.575169   94931 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 09:46:05.796148   94931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 09:46:05.796172   94931 machine.go:91] provisioned docker machine in 4.04308074s
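The container-runtime step above writes a one-line environment file and restarts CRI-O so it starts with the extra --insecure-registry flag; its effect can be checked from the host afterwards (a sketch; that the crio unit sources /etc/sysconfig/crio.minikube is inferred from this provisioning step, not verified here):

    # Environment file written by the provisioning step
    docker exec multinode-218062-m02 cat /etc/sysconfig/crio.minikube
    # CRI-O should be active again after the restart
    docker exec multinode-218062-m02 systemctl is-active crio      # expect: active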
	I0115 09:46:05.796182   94931 client.go:171] LocalClient.Create took 10.211343996s
	I0115 09:46:05.796210   94931 start.go:167] duration metric: libmachine.API.Create for "multinode-218062" took 10.211394739s
	I0115 09:46:05.796220   94931 start.go:300] post-start starting for "multinode-218062-m02" (driver="docker")
	I0115 09:46:05.796232   94931 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 09:46:05.796288   94931 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 09:46:05.796333   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062-m02
	I0115 09:46:05.813082   94931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062-m02/id_rsa Username:docker}
	I0115 09:46:05.909931   94931 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 09:46:05.913030   94931 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0115 09:46:05.913051   94931 command_runner.go:130] > NAME="Ubuntu"
	I0115 09:46:05.913058   94931 command_runner.go:130] > VERSION_ID="22.04"
	I0115 09:46:05.913067   94931 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0115 09:46:05.913072   94931 command_runner.go:130] > VERSION_CODENAME=jammy
	I0115 09:46:05.913076   94931 command_runner.go:130] > ID=ubuntu
	I0115 09:46:05.913087   94931 command_runner.go:130] > ID_LIKE=debian
	I0115 09:46:05.913091   94931 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0115 09:46:05.913121   94931 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0115 09:46:05.913131   94931 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0115 09:46:05.913142   94931 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0115 09:46:05.913151   94931 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0115 09:46:05.913201   94931 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0115 09:46:05.913232   94931 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0115 09:46:05.913243   94931 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0115 09:46:05.913251   94931 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0115 09:46:05.913261   94931 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-3696/.minikube/addons for local assets ...
	I0115 09:46:05.913313   94931 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-3696/.minikube/files for local assets ...
	I0115 09:46:05.913377   94931 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem -> 118252.pem in /etc/ssl/certs
	I0115 09:46:05.913386   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem -> /etc/ssl/certs/118252.pem
	I0115 09:46:05.913465   94931 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 09:46:05.921507   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem --> /etc/ssl/certs/118252.pem (1708 bytes)
	I0115 09:46:05.944875   94931 start.go:303] post-start completed in 148.640204ms
	I0115 09:46:05.945289   94931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-218062-m02
	I0115 09:46:05.962002   94931 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/config.json ...
	I0115 09:46:05.962295   94931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 09:46:05.962345   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062-m02
	I0115 09:46:05.979403   94931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062-m02/id_rsa Username:docker}
	I0115 09:46:06.069479   94931 command_runner.go:130] > 24%!
	(MISSING)I0115 09:46:06.069723   94931 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 09:46:06.073862   94931 command_runner.go:130] > 221G
	I0115 09:46:06.074029   94931 start.go:128] duration metric: createHost completed in 10.491513752s
	I0115 09:46:06.074049   94931 start.go:83] releasing machines lock for "multinode-218062-m02", held for 10.491628655s
	I0115 09:46:06.074114   94931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-218062-m02
	I0115 09:46:06.093401   94931 out.go:177] * Found network options:
	I0115 09:46:06.095124   94931 out.go:177]   - NO_PROXY=192.168.58.2
	W0115 09:46:06.096648   94931 proxy.go:119] fail to check proxy env: Error ip not in block
	W0115 09:46:06.096701   94931 proxy.go:119] fail to check proxy env: Error ip not in block
	I0115 09:46:06.096779   94931 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 09:46:06.096837   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062-m02
	I0115 09:46:06.096850   94931 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 09:46:06.096924   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062-m02
	I0115 09:46:06.114136   94931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062-m02/id_rsa Username:docker}
	I0115 09:46:06.116133   94931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062-m02/id_rsa Username:docker}
	I0115 09:46:06.338747   94931 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 09:46:06.338771   94931 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0115 09:46:06.344730   94931 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0115 09:46:06.344759   94931 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0115 09:46:06.344772   94931 command_runner.go:130] > Device: b0h/176d	Inode: 552115      Links: 1
	I0115 09:46:06.344782   94931 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 09:46:06.344791   94931 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0115 09:46:06.344807   94931 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0115 09:46:06.344814   94931 command_runner.go:130] > Change: 2024-01-15 09:26:53.362832697 +0000
	I0115 09:46:06.344821   94931 command_runner.go:130] >  Birth: 2024-01-15 09:26:53.362832697 +0000
	I0115 09:46:06.344884   94931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 09:46:06.362240   94931 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0115 09:46:06.362319   94931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 09:46:06.390903   94931 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0115 09:46:06.390935   94931 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
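After the two find/mv passes above, the stock loopback, podman and crio bridge configs are parked under a .mk_disabled suffix so that only the CNI minikube installs later is loaded; this is easy to see in the node (a sketch, with the expected names taken from the output above):

    docker exec multinode-218062-m02 ls -1 /etc/cni/net.d/
    # expected entries include:
    #   100-crio-bridge.conf.mk_disabled
    #   200-loopback.conf.mk_disabled
    #   87-podman-bridge.conflist.mk_disabled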
	I0115 09:46:06.390941   94931 start.go:475] detecting cgroup driver to use...
	I0115 09:46:06.390970   94931 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0115 09:46:06.391008   94931 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 09:46:06.405437   94931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 09:46:06.416172   94931 docker.go:217] disabling cri-docker service (if available) ...
	I0115 09:46:06.416230   94931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 09:46:06.428673   94931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 09:46:06.441825   94931 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 09:46:06.516673   94931 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 09:46:06.598416   94931 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0115 09:46:06.598456   94931 docker.go:233] disabling docker service ...
	I0115 09:46:06.598508   94931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 09:46:06.614833   94931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 09:46:06.625082   94931 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 09:46:06.698617   94931 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0115 09:46:06.698694   94931 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 09:46:06.787704   94931 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0115 09:46:06.787794   94931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 09:46:06.798276   94931 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 09:46:06.812723   94931 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0115 09:46:06.813655   94931 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 09:46:06.813722   94931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:46:06.823267   94931 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 09:46:06.823326   94931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:46:06.832628   94931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 09:46:06.841730   94931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
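The three sed edits above pin the pause image, cgroup manager and conmon cgroup in /etc/crio/crio.conf.d/02-crio.conf; a grep shows the resulting keys (a sketch; the expected values are exactly the ones substituted by the commands above):

    docker exec multinode-218062-m02 \
      grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # expected:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"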
	I0115 09:46:06.850852   94931 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 09:46:06.859509   94931 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 09:46:06.866942   94931 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0115 09:46:06.867592   94931 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 09:46:06.875575   94931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 09:46:06.951048   94931 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 09:46:07.046076   94931 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 09:46:07.046148   94931 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 09:46:07.049633   94931 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0115 09:46:07.049660   94931 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0115 09:46:07.049671   94931 command_runner.go:130] > Device: b9h/185d	Inode: 186         Links: 1
	I0115 09:46:07.049682   94931 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 09:46:07.049692   94931 command_runner.go:130] > Access: 2024-01-15 09:46:07.030482920 +0000
	I0115 09:46:07.049707   94931 command_runner.go:130] > Modify: 2024-01-15 09:46:07.030482920 +0000
	I0115 09:46:07.049716   94931 command_runner.go:130] > Change: 2024-01-15 09:46:07.030482920 +0000
	I0115 09:46:07.049726   94931 command_runner.go:130] >  Birth: -
	I0115 09:46:07.049748   94931 start.go:543] Will wait 60s for crictl version
	I0115 09:46:07.049790   94931 ssh_runner.go:195] Run: which crictl
	I0115 09:46:07.052781   94931 command_runner.go:130] > /usr/bin/crictl
	I0115 09:46:07.052858   94931 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 09:46:07.084159   94931 command_runner.go:130] > Version:  0.1.0
	I0115 09:46:07.084180   94931 command_runner.go:130] > RuntimeName:  cri-o
	I0115 09:46:07.084194   94931 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0115 09:46:07.084200   94931 command_runner.go:130] > RuntimeApiVersion:  v1
	I0115 09:46:07.086048   94931 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
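If the crictl lookup ever needs to be reproduced by hand, the same version query can be pointed at the CRI-O socket explicitly (a sketch; the socket path is the one written to /etc/crictl.yaml above):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version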
	I0115 09:46:07.086132   94931 ssh_runner.go:195] Run: crio --version
	I0115 09:46:07.119686   94931 command_runner.go:130] > crio version 1.24.6
	I0115 09:46:07.119713   94931 command_runner.go:130] > Version:          1.24.6
	I0115 09:46:07.119724   94931 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0115 09:46:07.119732   94931 command_runner.go:130] > GitTreeState:     clean
	I0115 09:46:07.119742   94931 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0115 09:46:07.119750   94931 command_runner.go:130] > GoVersion:        go1.18.2
	I0115 09:46:07.119758   94931 command_runner.go:130] > Compiler:         gc
	I0115 09:46:07.119766   94931 command_runner.go:130] > Platform:         linux/amd64
	I0115 09:46:07.119782   94931 command_runner.go:130] > Linkmode:         dynamic
	I0115 09:46:07.119794   94931 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 09:46:07.119805   94931 command_runner.go:130] > SeccompEnabled:   true
	I0115 09:46:07.119816   94931 command_runner.go:130] > AppArmorEnabled:  false
	I0115 09:46:07.121576   94931 ssh_runner.go:195] Run: crio --version
	I0115 09:46:07.154193   94931 command_runner.go:130] > crio version 1.24.6
	I0115 09:46:07.154224   94931 command_runner.go:130] > Version:          1.24.6
	I0115 09:46:07.154236   94931 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0115 09:46:07.154244   94931 command_runner.go:130] > GitTreeState:     clean
	I0115 09:46:07.154253   94931 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0115 09:46:07.154261   94931 command_runner.go:130] > GoVersion:        go1.18.2
	I0115 09:46:07.154268   94931 command_runner.go:130] > Compiler:         gc
	I0115 09:46:07.154273   94931 command_runner.go:130] > Platform:         linux/amd64
	I0115 09:46:07.154282   94931 command_runner.go:130] > Linkmode:         dynamic
	I0115 09:46:07.154299   94931 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 09:46:07.154305   94931 command_runner.go:130] > SeccompEnabled:   true
	I0115 09:46:07.154313   94931 command_runner.go:130] > AppArmorEnabled:  false
	I0115 09:46:07.157589   94931 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0115 09:46:07.158916   94931 out.go:177]   - env NO_PROXY=192.168.58.2
	I0115 09:46:07.160124   94931 cli_runner.go:164] Run: docker network inspect multinode-218062 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 09:46:07.176241   94931 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0115 09:46:07.179700   94931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
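The one-liner above rewrites /etc/hosts atomically: it filters out any stale host.minikube.internal entry, appends the new one, and copies the temp file back over /etc/hosts. Whether the entry landed can be checked with a plain grep (a sketch):

    docker exec multinode-218062-m02 grep host.minikube.internal /etc/hosts
    # expected: 192.168.58.1	host.minikube.internal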
	I0115 09:46:07.189776   94931 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062 for IP: 192.168.58.3
	I0115 09:46:07.189814   94931 certs.go:190] acquiring lock for shared ca certs: {Name:mk436e7b36fef987bcfd7cb65df7b354c02b1a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:46:07.189967   94931 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.key
	I0115 09:46:07.190047   94931 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.key
	I0115 09:46:07.190066   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 09:46:07.190088   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 09:46:07.190107   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 09:46:07.190126   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 09:46:07.190186   94931 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/11825.pem (1338 bytes)
	W0115 09:46:07.190231   94931 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/11825_empty.pem, impossibly tiny 0 bytes
	I0115 09:46:07.190246   94931 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca-key.pem (1675 bytes)
	I0115 09:46:07.190282   94931 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/ca.pem (1082 bytes)
	I0115 09:46:07.190317   94931 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/cert.pem (1123 bytes)
	I0115 09:46:07.190356   94931 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/home/jenkins/minikube-integration/17953-3696/.minikube/certs/key.pem (1679 bytes)
	I0115 09:46:07.190412   94931 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem (1708 bytes)
	I0115 09:46:07.190451   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/certs/11825.pem -> /usr/share/ca-certificates/11825.pem
	I0115 09:46:07.190470   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem -> /usr/share/ca-certificates/118252.pem
	I0115 09:46:07.190488   94931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:46:07.190827   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 09:46:07.212481   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 09:46:07.233742   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 09:46:07.255769   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 09:46:07.277518   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/certs/11825.pem --> /usr/share/ca-certificates/11825.pem (1338 bytes)
	I0115 09:46:07.298728   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/ssl/certs/118252.pem --> /usr/share/ca-certificates/118252.pem (1708 bytes)
	I0115 09:46:07.319843   94931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 09:46:07.341465   94931 ssh_runner.go:195] Run: openssl version
	I0115 09:46:07.346476   94931 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0115 09:46:07.346560   94931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11825.pem && ln -fs /usr/share/ca-certificates/11825.pem /etc/ssl/certs/11825.pem"
	I0115 09:46:07.354827   94931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11825.pem
	I0115 09:46:07.358051   94931 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 15 09:33 /usr/share/ca-certificates/11825.pem
	I0115 09:46:07.358096   94931 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 09:33 /usr/share/ca-certificates/11825.pem
	I0115 09:46:07.358145   94931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11825.pem
	I0115 09:46:07.364049   94931 command_runner.go:130] > 51391683
	I0115 09:46:07.364301   94931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11825.pem /etc/ssl/certs/51391683.0"
	I0115 09:46:07.372621   94931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/118252.pem && ln -fs /usr/share/ca-certificates/118252.pem /etc/ssl/certs/118252.pem"
	I0115 09:46:07.381042   94931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/118252.pem
	I0115 09:46:07.384242   94931 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 15 09:33 /usr/share/ca-certificates/118252.pem
	I0115 09:46:07.384292   94931 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 09:33 /usr/share/ca-certificates/118252.pem
	I0115 09:46:07.384341   94931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/118252.pem
	I0115 09:46:07.390962   94931 command_runner.go:130] > 3ec20f2e
	I0115 09:46:07.391035   94931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/118252.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 09:46:07.399693   94931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 09:46:07.408192   94931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:46:07.411398   94931 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:46:07.411432   94931 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:46:07.411475   94931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 09:46:07.417740   94931 command_runner.go:130] > b5213941
	I0115 09:46:07.417961   94931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
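The hash/symlink sequence above is what makes the copied PEMs discoverable through OpenSSL's default trust lookup; the same step can be reproduced by hand for any extra CA (a minimal sketch, run inside the node, assuming the cert already sits under /usr/share/ca-certificates):

    cert=/usr/share/ca-certificates/minikubeCA.pem      # example path from the log above
    hash=$(openssl x509 -hash -noout -in "$cert")       # e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"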
	I0115 09:46:07.426611   94931 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 09:46:07.429781   94931 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 09:46:07.429825   94931 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 09:46:07.429906   94931 ssh_runner.go:195] Run: crio config
	I0115 09:46:07.465196   94931 command_runner.go:130] ! time="2024-01-15 09:46:07.464739858Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0115 09:46:07.465224   94931 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0115 09:46:07.471165   94931 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0115 09:46:07.471198   94931 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0115 09:46:07.471210   94931 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0115 09:46:07.471216   94931 command_runner.go:130] > #
	I0115 09:46:07.471231   94931 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0115 09:46:07.471239   94931 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0115 09:46:07.471246   94931 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0115 09:46:07.471254   94931 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0115 09:46:07.471260   94931 command_runner.go:130] > # reload'.
	I0115 09:46:07.471268   94931 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0115 09:46:07.471277   94931 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0115 09:46:07.471284   94931 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0115 09:46:07.471291   94931 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0115 09:46:07.471295   94931 command_runner.go:130] > [crio]
	I0115 09:46:07.471304   94931 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0115 09:46:07.471310   94931 command_runner.go:130] > # containers images, in this directory.
	I0115 09:46:07.471327   94931 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0115 09:46:07.471336   94931 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0115 09:46:07.471343   94931 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0115 09:46:07.471350   94931 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0115 09:46:07.471359   94931 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0115 09:46:07.471364   94931 command_runner.go:130] > # storage_driver = "vfs"
	I0115 09:46:07.471372   94931 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0115 09:46:07.471378   94931 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0115 09:46:07.471385   94931 command_runner.go:130] > # storage_option = [
	I0115 09:46:07.471389   94931 command_runner.go:130] > # ]
	I0115 09:46:07.471398   94931 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0115 09:46:07.471404   94931 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0115 09:46:07.471412   94931 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0115 09:46:07.471417   94931 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0115 09:46:07.471426   94931 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0115 09:46:07.471435   94931 command_runner.go:130] > # always happen on a node reboot
	I0115 09:46:07.471440   94931 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0115 09:46:07.471448   94931 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0115 09:46:07.471454   94931 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0115 09:46:07.471465   94931 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0115 09:46:07.471472   94931 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0115 09:46:07.471480   94931 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0115 09:46:07.471490   94931 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0115 09:46:07.471496   94931 command_runner.go:130] > # internal_wipe = true
	I0115 09:46:07.471502   94931 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0115 09:46:07.471511   94931 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0115 09:46:07.471519   94931 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0115 09:46:07.471524   94931 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0115 09:46:07.471532   94931 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0115 09:46:07.471536   94931 command_runner.go:130] > [crio.api]
	I0115 09:46:07.471544   94931 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0115 09:46:07.471549   94931 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0115 09:46:07.471557   94931 command_runner.go:130] > # IP address on which the stream server will listen.
	I0115 09:46:07.471561   94931 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0115 09:46:07.471571   94931 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0115 09:46:07.471577   94931 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0115 09:46:07.471583   94931 command_runner.go:130] > # stream_port = "0"
	I0115 09:46:07.471594   94931 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0115 09:46:07.471600   94931 command_runner.go:130] > # stream_enable_tls = false
	I0115 09:46:07.471606   94931 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0115 09:46:07.471610   94931 command_runner.go:130] > # stream_idle_timeout = ""
	I0115 09:46:07.471616   94931 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0115 09:46:07.471622   94931 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0115 09:46:07.471629   94931 command_runner.go:130] > # minutes.
	I0115 09:46:07.471633   94931 command_runner.go:130] > # stream_tls_cert = ""
	I0115 09:46:07.471639   94931 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0115 09:46:07.471647   94931 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0115 09:46:07.471654   94931 command_runner.go:130] > # stream_tls_key = ""
	I0115 09:46:07.471661   94931 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0115 09:46:07.471670   94931 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0115 09:46:07.471675   94931 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0115 09:46:07.471683   94931 command_runner.go:130] > # stream_tls_ca = ""
	I0115 09:46:07.471690   94931 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 09:46:07.471697   94931 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0115 09:46:07.471704   94931 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 09:46:07.471709   94931 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0115 09:46:07.471730   94931 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0115 09:46:07.471740   94931 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0115 09:46:07.471744   94931 command_runner.go:130] > [crio.runtime]
	I0115 09:46:07.471752   94931 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0115 09:46:07.471760   94931 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0115 09:46:07.471765   94931 command_runner.go:130] > # "nofile=1024:2048"
	I0115 09:46:07.471773   94931 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0115 09:46:07.471778   94931 command_runner.go:130] > # default_ulimits = [
	I0115 09:46:07.471784   94931 command_runner.go:130] > # ]
	I0115 09:46:07.471790   94931 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0115 09:46:07.471794   94931 command_runner.go:130] > # no_pivot = false
	I0115 09:46:07.471799   94931 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0115 09:46:07.471809   94931 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0115 09:46:07.471816   94931 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0115 09:46:07.471822   94931 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0115 09:46:07.471831   94931 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0115 09:46:07.471838   94931 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 09:46:07.471845   94931 command_runner.go:130] > # conmon = ""
	I0115 09:46:07.471849   94931 command_runner.go:130] > # Cgroup setting for conmon
	I0115 09:46:07.471858   94931 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0115 09:46:07.471863   94931 command_runner.go:130] > conmon_cgroup = "pod"
	I0115 09:46:07.471871   94931 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0115 09:46:07.471876   94931 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0115 09:46:07.471885   94931 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 09:46:07.471889   94931 command_runner.go:130] > # conmon_env = [
	I0115 09:46:07.471894   94931 command_runner.go:130] > # ]
	I0115 09:46:07.471900   94931 command_runner.go:130] > # Additional environment variables to set for all the
	I0115 09:46:07.471908   94931 command_runner.go:130] > # containers. These are overridden if set in the
	I0115 09:46:07.471914   94931 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0115 09:46:07.471920   94931 command_runner.go:130] > # default_env = [
	I0115 09:46:07.471923   94931 command_runner.go:130] > # ]
	I0115 09:46:07.471932   94931 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0115 09:46:07.471939   94931 command_runner.go:130] > # selinux = false
	I0115 09:46:07.471945   94931 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0115 09:46:07.471954   94931 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0115 09:46:07.471961   94931 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0115 09:46:07.471966   94931 command_runner.go:130] > # seccomp_profile = ""
	I0115 09:46:07.471973   94931 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0115 09:46:07.471979   94931 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0115 09:46:07.471986   94931 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0115 09:46:07.471990   94931 command_runner.go:130] > # which might increase security.
	I0115 09:46:07.471996   94931 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0115 09:46:07.472002   94931 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0115 09:46:07.472011   94931 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0115 09:46:07.472017   94931 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0115 09:46:07.472025   94931 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0115 09:46:07.472030   94931 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:46:07.472038   94931 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0115 09:46:07.472043   94931 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0115 09:46:07.472053   94931 command_runner.go:130] > # the cgroup blockio controller.
	I0115 09:46:07.472058   94931 command_runner.go:130] > # blockio_config_file = ""
	I0115 09:46:07.472064   94931 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0115 09:46:07.472068   94931 command_runner.go:130] > # irqbalance daemon.
	I0115 09:46:07.472075   94931 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0115 09:46:07.472082   94931 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0115 09:46:07.472090   94931 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:46:07.472094   94931 command_runner.go:130] > # rdt_config_file = ""
	I0115 09:46:07.472102   94931 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0115 09:46:07.472106   94931 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0115 09:46:07.472118   94931 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0115 09:46:07.472129   94931 command_runner.go:130] > # separate_pull_cgroup = ""
	I0115 09:46:07.472141   94931 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0115 09:46:07.472153   94931 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0115 09:46:07.472158   94931 command_runner.go:130] > # will be added.
	I0115 09:46:07.472163   94931 command_runner.go:130] > # default_capabilities = [
	I0115 09:46:07.472170   94931 command_runner.go:130] > # 	"CHOWN",
	I0115 09:46:07.472176   94931 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0115 09:46:07.472184   94931 command_runner.go:130] > # 	"FSETID",
	I0115 09:46:07.472187   94931 command_runner.go:130] > # 	"FOWNER",
	I0115 09:46:07.472191   94931 command_runner.go:130] > # 	"SETGID",
	I0115 09:46:07.472196   94931 command_runner.go:130] > # 	"SETUID",
	I0115 09:46:07.472200   94931 command_runner.go:130] > # 	"SETPCAP",
	I0115 09:46:07.472205   94931 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0115 09:46:07.472211   94931 command_runner.go:130] > # 	"KILL",
	I0115 09:46:07.472215   94931 command_runner.go:130] > # ]
	I0115 09:46:07.472226   94931 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0115 09:46:07.472234   94931 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0115 09:46:07.472240   94931 command_runner.go:130] > # add_inheritable_capabilities = true
	I0115 09:46:07.472250   94931 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0115 09:46:07.472256   94931 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 09:46:07.472263   94931 command_runner.go:130] > # default_sysctls = [
	I0115 09:46:07.472266   94931 command_runner.go:130] > # ]
	I0115 09:46:07.472271   94931 command_runner.go:130] > # List of devices on the host that a
	I0115 09:46:07.472281   94931 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0115 09:46:07.472285   94931 command_runner.go:130] > # allowed_devices = [
	I0115 09:46:07.472292   94931 command_runner.go:130] > # 	"/dev/fuse",
	I0115 09:46:07.472296   94931 command_runner.go:130] > # ]
	I0115 09:46:07.472304   94931 command_runner.go:130] > # List of additional devices. specified as
	I0115 09:46:07.472329   94931 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0115 09:46:07.472337   94931 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0115 09:46:07.472343   94931 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 09:46:07.472350   94931 command_runner.go:130] > # additional_devices = [
	I0115 09:46:07.472354   94931 command_runner.go:130] > # ]
	I0115 09:46:07.472360   94931 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0115 09:46:07.472364   94931 command_runner.go:130] > # cdi_spec_dirs = [
	I0115 09:46:07.472368   94931 command_runner.go:130] > # 	"/etc/cdi",
	I0115 09:46:07.472374   94931 command_runner.go:130] > # 	"/var/run/cdi",
	I0115 09:46:07.472378   94931 command_runner.go:130] > # ]
	I0115 09:46:07.472387   94931 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0115 09:46:07.472393   94931 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0115 09:46:07.472399   94931 command_runner.go:130] > # Defaults to false.
	I0115 09:46:07.472404   94931 command_runner.go:130] > # device_ownership_from_security_context = false
	I0115 09:46:07.472413   94931 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0115 09:46:07.472421   94931 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0115 09:46:07.472427   94931 command_runner.go:130] > # hooks_dir = [
	I0115 09:46:07.472432   94931 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0115 09:46:07.472437   94931 command_runner.go:130] > # ]
	I0115 09:46:07.472443   94931 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0115 09:46:07.472450   94931 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0115 09:46:07.472458   94931 command_runner.go:130] > # its default mounts from the following two files:
	I0115 09:46:07.472461   94931 command_runner.go:130] > #
	I0115 09:46:07.472469   94931 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0115 09:46:07.472477   94931 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0115 09:46:07.472486   94931 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0115 09:46:07.472489   94931 command_runner.go:130] > #
	I0115 09:46:07.472496   94931 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0115 09:46:07.472504   94931 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0115 09:46:07.472511   94931 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0115 09:46:07.472518   94931 command_runner.go:130] > #      only add mounts it finds in this file.
	I0115 09:46:07.472522   94931 command_runner.go:130] > #
	I0115 09:46:07.472529   94931 command_runner.go:130] > # default_mounts_file = ""
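The mounts-file behaviour above can be sketched as follows; the /SRC:/DST entry is purely illustrative, while the path is the documented override location:

    # /etc/containers/mounts.conf -- one mount per line, in /SRC:/DST form
    /usr/share/zoneinfo:/usr/share/zoneinfo

    # crio.conf: only needed if CRI-O should read a different mounts file
    [crio.runtime]
    default_mounts_file = "/etc/containers/mounts.conf"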
	I0115 09:46:07.472537   94931 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0115 09:46:07.472546   94931 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0115 09:46:07.472550   94931 command_runner.go:130] > # pids_limit = 0
	I0115 09:46:07.472559   94931 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0115 09:46:07.472565   94931 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0115 09:46:07.472574   94931 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0115 09:46:07.472582   94931 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0115 09:46:07.472596   94931 command_runner.go:130] > # log_size_max = -1
	I0115 09:46:07.472605   94931 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0115 09:46:07.472609   94931 command_runner.go:130] > # log_to_journald = false
	I0115 09:46:07.472616   94931 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0115 09:46:07.472623   94931 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0115 09:46:07.472629   94931 command_runner.go:130] > # Path to directory for container attach sockets.
	I0115 09:46:07.472636   94931 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0115 09:46:07.472642   94931 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0115 09:46:07.472648   94931 command_runner.go:130] > # bind_mount_prefix = ""
	I0115 09:46:07.472654   94931 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0115 09:46:07.472661   94931 command_runner.go:130] > # read_only = false
	I0115 09:46:07.472668   94931 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0115 09:46:07.472676   94931 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0115 09:46:07.472681   94931 command_runner.go:130] > # live configuration reload.
	I0115 09:46:07.472688   94931 command_runner.go:130] > # log_level = "info"
	I0115 09:46:07.472693   94931 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0115 09:46:07.472698   94931 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:46:07.472705   94931 command_runner.go:130] > # log_filter = ""
	I0115 09:46:07.472711   94931 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0115 09:46:07.472719   94931 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0115 09:46:07.472723   94931 command_runner.go:130] > # separated by comma.
	I0115 09:46:07.472729   94931 command_runner.go:130] > # uid_mappings = ""
	I0115 09:46:07.472737   94931 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0115 09:46:07.472743   94931 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0115 09:46:07.472752   94931 command_runner.go:130] > # separated by comma.
	I0115 09:46:07.472756   94931 command_runner.go:130] > # gid_mappings = ""
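A sketch of the mapping syntax described above; the ranges are made-up values for illustration only:

    [crio.runtime]
    # containerUID:HostUID:Size (multiple ranges separated by commas)
    uid_mappings = "0:100000:65536"
    gid_mappings = "0:100000:65536"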
	I0115 09:46:07.472763   94931 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0115 09:46:07.472769   94931 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 09:46:07.472778   94931 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 09:46:07.472782   94931 command_runner.go:130] > # minimum_mappable_uid = -1
	I0115 09:46:07.472789   94931 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0115 09:46:07.472797   94931 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 09:46:07.472804   94931 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 09:46:07.472810   94931 command_runner.go:130] > # minimum_mappable_gid = -1
	I0115 09:46:07.472816   94931 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0115 09:46:07.472824   94931 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0115 09:46:07.472830   94931 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0115 09:46:07.472836   94931 command_runner.go:130] > # ctr_stop_timeout = 30
	I0115 09:46:07.472842   94931 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0115 09:46:07.472853   94931 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0115 09:46:07.472858   94931 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0115 09:46:07.472865   94931 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0115 09:46:07.472869   94931 command_runner.go:130] > # drop_infra_ctr = true
	I0115 09:46:07.472875   94931 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0115 09:46:07.472883   94931 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0115 09:46:07.472890   94931 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0115 09:46:07.472896   94931 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0115 09:46:07.472905   94931 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0115 09:46:07.472912   94931 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0115 09:46:07.472916   94931 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0115 09:46:07.472929   94931 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0115 09:46:07.472936   94931 command_runner.go:130] > # pinns_path = ""
	I0115 09:46:07.472942   94931 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0115 09:46:07.472951   94931 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0115 09:46:07.472957   94931 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0115 09:46:07.472962   94931 command_runner.go:130] > # default_runtime = "runc"
	I0115 09:46:07.472968   94931 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0115 09:46:07.472977   94931 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0115 09:46:07.472986   94931 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0115 09:46:07.472994   94931 command_runner.go:130] > # creation as a file is not desired either.
	I0115 09:46:07.473002   94931 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0115 09:46:07.473009   94931 command_runner.go:130] > # the hostname is being managed dynamically.
	I0115 09:46:07.473014   94931 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0115 09:46:07.473019   94931 command_runner.go:130] > # ]
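Following the /etc/hostname example given above, a minimal sketch of the option being set (assumed, not part of this run's configuration):

    [crio.runtime]
    absent_mount_sources_to_reject = [
    	"/etc/hostname",
    ]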
	I0115 09:46:07.473026   94931 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0115 09:46:07.473036   94931 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0115 09:46:07.473043   94931 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0115 09:46:07.473051   94931 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0115 09:46:07.473055   94931 command_runner.go:130] > #
	I0115 09:46:07.473060   94931 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0115 09:46:07.473067   94931 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0115 09:46:07.473071   94931 command_runner.go:130] > #  runtime_type = "oci"
	I0115 09:46:07.473078   94931 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0115 09:46:07.473083   94931 command_runner.go:130] > #  privileged_without_host_devices = false
	I0115 09:46:07.473090   94931 command_runner.go:130] > #  allowed_annotations = []
	I0115 09:46:07.473113   94931 command_runner.go:130] > # Where:
	I0115 09:46:07.473126   94931 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0115 09:46:07.473135   94931 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0115 09:46:07.473144   94931 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0115 09:46:07.473150   94931 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0115 09:46:07.473157   94931 command_runner.go:130] > #   in $PATH.
	I0115 09:46:07.473163   94931 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0115 09:46:07.473171   94931 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0115 09:46:07.473178   94931 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0115 09:46:07.473185   94931 command_runner.go:130] > #   state.
	I0115 09:46:07.473192   94931 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0115 09:46:07.473200   94931 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0115 09:46:07.473206   94931 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0115 09:46:07.473214   94931 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0115 09:46:07.473221   94931 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0115 09:46:07.473228   94931 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0115 09:46:07.473235   94931 command_runner.go:130] > #   The currently recognized values are:
	I0115 09:46:07.473241   94931 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0115 09:46:07.473251   94931 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0115 09:46:07.473257   94931 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0115 09:46:07.473265   94931 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0115 09:46:07.473273   94931 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0115 09:46:07.473282   94931 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0115 09:46:07.473288   94931 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0115 09:46:07.473296   94931 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0115 09:46:07.473301   94931 command_runner.go:130] > #   should be moved to the container's cgroup
	I0115 09:46:07.473305   94931 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0115 09:46:07.473311   94931 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0115 09:46:07.473316   94931 command_runner.go:130] > runtime_type = "oci"
	I0115 09:46:07.473322   94931 command_runner.go:130] > runtime_root = "/run/runc"
	I0115 09:46:07.473326   94931 command_runner.go:130] > runtime_config_path = ""
	I0115 09:46:07.473333   94931 command_runner.go:130] > monitor_path = ""
	I0115 09:46:07.473337   94931 command_runner.go:130] > monitor_cgroup = ""
	I0115 09:46:07.473344   94931 command_runner.go:130] > monitor_exec_cgroup = ""
	I0115 09:46:07.473394   94931 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0115 09:46:07.473402   94931 command_runner.go:130] > # running containers
	I0115 09:46:07.473407   94931 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0115 09:46:07.473413   94931 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0115 09:46:07.473419   94931 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0115 09:46:07.473425   94931 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0115 09:46:07.473431   94931 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0115 09:46:07.473435   94931 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0115 09:46:07.473443   94931 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0115 09:46:07.473447   94931 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0115 09:46:07.473455   94931 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0115 09:46:07.473460   94931 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
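Putting the handler format above together, a hedged sketch of registering crun as an additional runtime handler; the binary and root paths are assumptions and would need to match the host:

    [crio.runtime.runtimes.crun]
    runtime_path = "/usr/bin/crun"
    runtime_type = "oci"
    runtime_root = "/run/crun"
    # Limit which experimental annotations this handler may process.
    allowed_annotations = [
    	"io.kubernetes.cri-o.Devices",
    ]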
	I0115 09:46:07.473468   94931 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0115 09:46:07.473474   94931 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0115 09:46:07.473482   94931 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0115 09:46:07.473490   94931 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0115 09:46:07.473499   94931 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0115 09:46:07.473505   94931 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0115 09:46:07.473517   94931 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0115 09:46:07.473528   94931 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0115 09:46:07.473536   94931 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0115 09:46:07.473546   94931 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0115 09:46:07.473550   94931 command_runner.go:130] > # Example:
	I0115 09:46:07.473557   94931 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0115 09:46:07.473562   94931 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0115 09:46:07.473568   94931 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0115 09:46:07.473573   94931 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0115 09:46:07.473579   94931 command_runner.go:130] > # cpuset = "0-1"
	I0115 09:46:07.473583   94931 command_runner.go:130] > # cpushares = "0"
	I0115 09:46:07.473587   94931 command_runner.go:130] > # Where:
	I0115 09:46:07.473598   94931 command_runner.go:130] > # The workload name is workload-type.
	I0115 09:46:07.473606   94931 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0115 09:46:07.473614   94931 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0115 09:46:07.473619   94931 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0115 09:46:07.473629   94931 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0115 09:46:07.473640   94931 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
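Tying the workload example together, a sketch (names and values are illustrative assumptions) of a workload table plus the pod annotations that would activate it and override one container:

    [crio.runtime.workloads.throttled]
    activation_annotation = "io.crio/workload"
    annotation_prefix = "io.crio.workload-type"
    [crio.runtime.workloads.throttled.resources]
    cpushares = "512"
    cpuset = "0-1"

    # Pod annotations (shown here as comments):
    #   io.crio/workload: ""                          # opts the pod into the workload
    #   io.crio.workload-type.cpushares/app: "1024"   # per-container override for container "app"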
	I0115 09:46:07.473646   94931 command_runner.go:130] > # 
	I0115 09:46:07.473653   94931 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0115 09:46:07.473658   94931 command_runner.go:130] > #
	I0115 09:46:07.473664   94931 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0115 09:46:07.473672   94931 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0115 09:46:07.473678   94931 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0115 09:46:07.473687   94931 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0115 09:46:07.473693   94931 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0115 09:46:07.473699   94931 command_runner.go:130] > [crio.image]
	I0115 09:46:07.473705   94931 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0115 09:46:07.473712   94931 command_runner.go:130] > # default_transport = "docker://"
	I0115 09:46:07.473719   94931 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0115 09:46:07.473728   94931 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0115 09:46:07.473732   94931 command_runner.go:130] > # global_auth_file = ""
	I0115 09:46:07.473738   94931 command_runner.go:130] > # The image used to instantiate infra containers.
	I0115 09:46:07.473743   94931 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:46:07.473750   94931 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0115 09:46:07.473757   94931 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0115 09:46:07.473765   94931 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0115 09:46:07.473770   94931 command_runner.go:130] > # This option supports live configuration reload.
	I0115 09:46:07.473776   94931 command_runner.go:130] > # pause_image_auth_file = ""
	I0115 09:46:07.473782   94931 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0115 09:46:07.473790   94931 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0115 09:46:07.473797   94931 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0115 09:46:07.473805   94931 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0115 09:46:07.473809   94931 command_runner.go:130] > # pause_command = "/pause"
	I0115 09:46:07.473818   94931 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0115 09:46:07.473824   94931 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0115 09:46:07.473833   94931 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0115 09:46:07.473841   94931 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0115 09:46:07.473846   94931 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0115 09:46:07.473853   94931 command_runner.go:130] > # signature_policy = ""
	I0115 09:46:07.473862   94931 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0115 09:46:07.473872   94931 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0115 09:46:07.473878   94931 command_runner.go:130] > # changing them here.
	I0115 09:46:07.473883   94931 command_runner.go:130] > # insecure_registries = [
	I0115 09:46:07.473888   94931 command_runner.go:130] > # ]
	I0115 09:46:07.473895   94931 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0115 09:46:07.473902   94931 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0115 09:46:07.473906   94931 command_runner.go:130] > # image_volumes = "mkdir"
	I0115 09:46:07.473911   94931 command_runner.go:130] > # Temporary directory to use for storing big files
	I0115 09:46:07.473916   94931 command_runner.go:130] > # big_files_temporary_dir = ""
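A short sketch of the image-related settings discussed above; pause_image mirrors the value this run uses, while the insecure registry entry is only an example:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    # Prefer /etc/containers/registries.conf for registry configuration; use this
    # only for CRI-O-specific exceptions.
    insecure_registries = [
    	"registry.local:5000",
    ]
    image_volumes = "mkdir"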
	I0115 09:46:07.473923   94931 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0115 09:46:07.473929   94931 command_runner.go:130] > # CNI plugins.
	I0115 09:46:07.473933   94931 command_runner.go:130] > [crio.network]
	I0115 09:46:07.473940   94931 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0115 09:46:07.473952   94931 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0115 09:46:07.473959   94931 command_runner.go:130] > # cni_default_network = ""
	I0115 09:46:07.473964   94931 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0115 09:46:07.473971   94931 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0115 09:46:07.473977   94931 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0115 09:46:07.473984   94931 command_runner.go:130] > # plugin_dirs = [
	I0115 09:46:07.473988   94931 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0115 09:46:07.473994   94931 command_runner.go:130] > # ]
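The CNI settings above, written out as a sketch with their documented defaults:

    [crio.network]
    # If empty, CRI-O picks the first network found in network_dir.
    cni_default_network = ""
    network_dir = "/etc/cni/net.d/"
    plugin_dirs = [
    	"/opt/cni/bin/",
    ]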
	I0115 09:46:07.473999   94931 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0115 09:46:07.474003   94931 command_runner.go:130] > [crio.metrics]
	I0115 09:46:07.474008   94931 command_runner.go:130] > # Globally enable or disable metrics support.
	I0115 09:46:07.474013   94931 command_runner.go:130] > # enable_metrics = false
	I0115 09:46:07.474018   94931 command_runner.go:130] > # Specify enabled metrics collectors.
	I0115 09:46:07.474026   94931 command_runner.go:130] > # Per default all metrics are enabled.
	I0115 09:46:07.474032   94931 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0115 09:46:07.474040   94931 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0115 09:46:07.474046   94931 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0115 09:46:07.474052   94931 command_runner.go:130] > # metrics_collectors = [
	I0115 09:46:07.474058   94931 command_runner.go:130] > # 	"operations",
	I0115 09:46:07.474065   94931 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0115 09:46:07.474070   94931 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0115 09:46:07.474074   94931 command_runner.go:130] > # 	"operations_errors",
	I0115 09:46:07.474079   94931 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0115 09:46:07.474083   94931 command_runner.go:130] > # 	"image_pulls_by_name",
	I0115 09:46:07.474087   94931 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0115 09:46:07.474093   94931 command_runner.go:130] > # 	"image_pulls_failures",
	I0115 09:46:07.474098   94931 command_runner.go:130] > # 	"image_pulls_successes",
	I0115 09:46:07.474104   94931 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0115 09:46:07.474109   94931 command_runner.go:130] > # 	"image_layer_reuse",
	I0115 09:46:07.474115   94931 command_runner.go:130] > # 	"containers_oom_total",
	I0115 09:46:07.474120   94931 command_runner.go:130] > # 	"containers_oom",
	I0115 09:46:07.474126   94931 command_runner.go:130] > # 	"processes_defunct",
	I0115 09:46:07.474130   94931 command_runner.go:130] > # 	"operations_total",
	I0115 09:46:07.474136   94931 command_runner.go:130] > # 	"operations_latency_seconds",
	I0115 09:46:07.474141   94931 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0115 09:46:07.474146   94931 command_runner.go:130] > # 	"operations_errors_total",
	I0115 09:46:07.474153   94931 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0115 09:46:07.474160   94931 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0115 09:46:07.474164   94931 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0115 09:46:07.474171   94931 command_runner.go:130] > # 	"image_pulls_success_total",
	I0115 09:46:07.474175   94931 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0115 09:46:07.474180   94931 command_runner.go:130] > # 	"containers_oom_count_total",
	I0115 09:46:07.474186   94931 command_runner.go:130] > # ]
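A sketch of enabling metrics with an explicit collector list; as noted above, "operations", "crio_operations" and "container_runtime_crio_operations" refer to the same collector:

    [crio.metrics]
    enable_metrics = true
    metrics_port = 9090
    metrics_collectors = [
    	"operations",
    	"image_pulls_failure_total",
    	"containers_oom_count_total",
    ]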
	I0115 09:46:07.474191   94931 command_runner.go:130] > # The port on which the metrics server will listen.
	I0115 09:46:07.474201   94931 command_runner.go:130] > # metrics_port = 9090
	I0115 09:46:07.474206   94931 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0115 09:46:07.474212   94931 command_runner.go:130] > # metrics_socket = ""
	I0115 09:46:07.474217   94931 command_runner.go:130] > # The certificate for the secure metrics server.
	I0115 09:46:07.474225   94931 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0115 09:46:07.474231   94931 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0115 09:46:07.474236   94931 command_runner.go:130] > # certificate on any modification event.
	I0115 09:46:07.474243   94931 command_runner.go:130] > # metrics_cert = ""
	I0115 09:46:07.474248   94931 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0115 09:46:07.474256   94931 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0115 09:46:07.474262   94931 command_runner.go:130] > # metrics_key = ""
	I0115 09:46:07.474270   94931 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0115 09:46:07.474274   94931 command_runner.go:130] > [crio.tracing]
	I0115 09:46:07.474281   94931 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0115 09:46:07.474286   94931 command_runner.go:130] > # enable_tracing = false
	I0115 09:46:07.474293   94931 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0115 09:46:07.474298   94931 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0115 09:46:07.474305   94931 command_runner.go:130] > # Number of samples to collect per million spans.
	I0115 09:46:07.474311   94931 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0115 09:46:07.474319   94931 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0115 09:46:07.474323   94931 command_runner.go:130] > [crio.stats]
	I0115 09:46:07.474331   94931 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0115 09:46:07.474336   94931 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0115 09:46:07.474343   94931 command_runner.go:130] > # stats_collection_period = 0
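Everything above is CRI-O's annotated default configuration echoed during provisioning. Such defaults are typically overridden via a small drop-in rather than by editing the main file; a hedged sketch, with the path and values assumed rather than taken from this run:

    # /etc/crio/crio.conf.d/10-overrides.conf
    [crio.runtime]
    log_level = "debug"

    [crio.metrics]
    enable_metrics = true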
	I0115 09:46:07.474418   94931 cni.go:84] Creating CNI manager for ""
	I0115 09:46:07.474427   94931 cni.go:136] 2 nodes found, recommending kindnet
	I0115 09:46:07.474434   94931 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 09:46:07.474453   94931 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-218062 NodeName:multinode-218062-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 09:46:07.474561   94931 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-218062-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 09:46:07.474616   94931 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-218062-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-218062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 09:46:07.474665   94931 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 09:46:07.482297   94931 command_runner.go:130] > kubeadm
	I0115 09:46:07.482317   94931 command_runner.go:130] > kubectl
	I0115 09:46:07.482321   94931 command_runner.go:130] > kubelet
	I0115 09:46:07.482976   94931 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 09:46:07.483032   94931 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0115 09:46:07.491789   94931 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0115 09:46:07.507596   94931 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 09:46:07.523406   94931 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0115 09:46:07.526511   94931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 09:46:07.536276   94931 host.go:66] Checking if "multinode-218062" exists ...
	I0115 09:46:07.536554   94931 config.go:182] Loaded profile config "multinode-218062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:46:07.536576   94931 start.go:304] JoinCluster: &{Name:multinode-218062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-218062 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:46:07.536702   94931 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0115 09:46:07.536755   94931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062
	I0115 09:46:07.552790   94931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062/id_rsa Username:docker}
	I0115 09:46:07.699821   94931 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token to6pl5.w8hwjutao1tnzcg6 --discovery-token-ca-cert-hash sha256:d7912295337f01ac2906deb500e7500df52d877bdb5cb26be73339deab38c6d2 
	I0115 09:46:07.704129   94931 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0115 09:46:07.704172   94931 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token to6pl5.w8hwjutao1tnzcg6 --discovery-token-ca-cert-hash sha256:d7912295337f01ac2906deb500e7500df52d877bdb5cb26be73339deab38c6d2 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-218062-m02"
	I0115 09:46:07.738905   94931 command_runner.go:130] ! W0115 09:46:07.738467    1109 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0115 09:46:07.766842   94931 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-gcp\n", err: exit status 1
	I0115 09:46:07.831800   94931 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 09:46:09.958289   94931 command_runner.go:130] > [preflight] Running pre-flight checks
	I0115 09:46:09.958325   94931 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0115 09:46:09.958337   94931 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1048-gcp
	I0115 09:46:09.958345   94931 command_runner.go:130] > OS: Linux
	I0115 09:46:09.958354   94931 command_runner.go:130] > CGROUPS_CPU: enabled
	I0115 09:46:09.958365   94931 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0115 09:46:09.958382   94931 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0115 09:46:09.958392   94931 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0115 09:46:09.958401   94931 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0115 09:46:09.958409   94931 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0115 09:46:09.958415   94931 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0115 09:46:09.958422   94931 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0115 09:46:09.958428   94931 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0115 09:46:09.958435   94931 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0115 09:46:09.958450   94931 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0115 09:46:09.958465   94931 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 09:46:09.958480   94931 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 09:46:09.958492   94931 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0115 09:46:09.958508   94931 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0115 09:46:09.958517   94931 command_runner.go:130] > This node has joined the cluster:
	I0115 09:46:09.958527   94931 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0115 09:46:09.958540   94931 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0115 09:46:09.958554   94931 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0115 09:46:09.958579   94931 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token to6pl5.w8hwjutao1tnzcg6 --discovery-token-ca-cert-hash sha256:d7912295337f01ac2906deb500e7500df52d877bdb5cb26be73339deab38c6d2 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-218062-m02": (2.254392382s)
	I0115 09:46:09.958606   94931 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0115 09:46:10.119455   94931 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0115 09:46:10.119551   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=multinode-218062 minikube.k8s.io/updated_at=2024_01_15T09_46_10_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 09:46:10.190613   94931 command_runner.go:130] > node/multinode-218062-m02 labeled
	I0115 09:46:10.193475   94931 start.go:306] JoinCluster complete in 2.656897585s
	I0115 09:46:10.193503   94931 cni.go:84] Creating CNI manager for ""
	I0115 09:46:10.193509   94931 cni.go:136] 2 nodes found, recommending kindnet
	I0115 09:46:10.193552   94931 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 09:46:10.196959   94931 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0115 09:46:10.197072   94931 command_runner.go:130] >   Size: 4085020   	Blocks: 7984       IO Block: 4096   regular file
	I0115 09:46:10.197124   94931 command_runner.go:130] > Device: 37h/55d	Inode: 555949      Links: 1
	I0115 09:46:10.197136   94931 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 09:46:10.197149   94931 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0115 09:46:10.197164   94931 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0115 09:46:10.197172   94931 command_runner.go:130] > Change: 2024-01-15 09:26:53.774860876 +0000
	I0115 09:46:10.197178   94931 command_runner.go:130] >  Birth: 2024-01-15 09:26:53.750859235 +0000
	I0115 09:46:10.197233   94931 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0115 09:46:10.197247   94931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 09:46:10.213817   94931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0115 09:46:10.433733   94931 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0115 09:46:10.437732   94931 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0115 09:46:10.440847   94931 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0115 09:46:10.451549   94931 command_runner.go:130] > daemonset.apps/kindnet configured
	I0115 09:46:10.455892   94931 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:46:10.456121   94931 kapi.go:59] client config for multinode-218062: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/client.key", CAFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 09:46:10.456456   94931 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0115 09:46:10.456471   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:10.456479   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:10.456485   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:10.458715   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:46:10.458737   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:10.458744   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:10.458750   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:10.458755   94931 round_trippers.go:580]     Content-Length: 291
	I0115 09:46:10.458760   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:10 GMT
	I0115 09:46:10.458765   94931 round_trippers.go:580]     Audit-Id: b6a23dbd-f650-4161-b858-0f60e1fc8813
	I0115 09:46:10.458770   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:10.458776   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:10.458799   94931 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"22ec091e-f06f-49a1-8fda-0f72e5d1c41b","resourceVersion":"432","creationTimestamp":"2024-01-15T09:45:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0115 09:46:10.458880   94931 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-218062" context rescaled to 1 replicas
	I0115 09:46:10.458906   94931 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0115 09:46:10.461762   94931 out.go:177] * Verifying Kubernetes components...
	I0115 09:46:10.463158   94931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:46:10.474144   94931 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:46:10.474361   94931 kapi.go:59] client config for multinode-218062: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/profiles/multinode-218062/client.key", CAFile:"/home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 09:46:10.474597   94931 node_ready.go:35] waiting up to 6m0s for node "multinode-218062-m02" to be "Ready" ...
	I0115 09:46:10.474667   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062-m02
	I0115 09:46:10.474676   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:10.474683   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:10.474689   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:10.476969   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:46:10.476991   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:10.477000   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:10.477009   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:10.477022   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:10.477031   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:10.477048   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:10 GMT
	I0115 09:46:10.477056   94931 round_trippers.go:580]     Audit-Id: d338ac37-dcb4-4a25-b7e7-1f85b06c0665
	I0115 09:46:10.477175   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062-m02","uid":"63116a77-4fce-448e-bd9f-189a37d68976","resourceVersion":"474","creationTimestamp":"2024-01-15T09:46:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_46_10_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:46:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0115 09:46:10.974848   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062-m02
	I0115 09:46:10.974879   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:10.974892   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:10.974902   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:10.977297   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:46:10.977322   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:10.977332   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:10.977341   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:10.977350   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:10.977359   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:10.977368   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:10 GMT
	I0115 09:46:10.977380   94931 round_trippers.go:580]     Audit-Id: d2051182-20fe-45ae-9117-5ca7d4286078
	I0115 09:46:10.977503   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062-m02","uid":"63116a77-4fce-448e-bd9f-189a37d68976","resourceVersion":"474","creationTimestamp":"2024-01-15T09:46:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_46_10_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:46:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0115 09:46:11.474883   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062-m02
	I0115 09:46:11.474910   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:11.474919   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:11.474925   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:11.477293   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:46:11.477314   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:11.477321   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:11 GMT
	I0115 09:46:11.477327   94931 round_trippers.go:580]     Audit-Id: a3770c96-7cf3-4f1d-abaa-70a046f8bedf
	I0115 09:46:11.477335   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:11.477344   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:11.477353   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:11.477368   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:11.477585   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062-m02","uid":"63116a77-4fce-448e-bd9f-189a37d68976","resourceVersion":"484","creationTimestamp":"2024-01-15T09:46:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_46_10_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:46:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5848 chars]
	I0115 09:46:11.477914   94931 node_ready.go:49] node "multinode-218062-m02" has status "Ready":"True"
	I0115 09:46:11.477930   94931 node_ready.go:38] duration metric: took 1.003316942s waiting for node "multinode-218062-m02" to be "Ready" ...
	I0115 09:46:11.477939   94931 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 09:46:11.478003   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0115 09:46:11.478013   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:11.478020   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:11.478025   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:11.481364   94931 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 09:46:11.481394   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:11.481404   94931 round_trippers.go:580]     Audit-Id: c0a4b0f5-4612-448d-9fe7-919e7917a88a
	I0115 09:46:11.481413   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:11.481420   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:11.481428   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:11.481435   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:11.481449   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:11 GMT
	I0115 09:46:11.482210   94931 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q8r7r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"08d15645-f87b-4962-ac37-afaa15661146","resourceVersion":"428","creationTimestamp":"2024-01-15T09:45:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81ef8bfc-3a80-4670-9014-012e9507c528","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81ef8bfc-3a80-4670-9014-012e9507c528\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I0115 09:46:11.484249   94931 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q8r7r" in "kube-system" namespace to be "Ready" ...
	I0115 09:46:11.484332   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q8r7r
	I0115 09:46:11.484339   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:11.484346   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:11.484354   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:11.486479   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:46:11.486502   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:11.486512   94931 round_trippers.go:580]     Audit-Id: 2c2fee31-01c6-4def-ac91-d8ae8ad8de16
	I0115 09:46:11.486518   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:11.486523   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:11.486529   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:11.486544   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:11.486553   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:11 GMT
	I0115 09:46:11.486686   94931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q8r7r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"08d15645-f87b-4962-ac37-afaa15661146","resourceVersion":"428","creationTimestamp":"2024-01-15T09:45:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81ef8bfc-3a80-4670-9014-012e9507c528","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81ef8bfc-3a80-4670-9014-012e9507c528\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0115 09:46:11.487123   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:46:11.487135   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:11.487142   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:11.487151   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:11.489020   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:46:11.489035   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:11.489042   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:11.489049   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:11.489054   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:11.489059   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:11 GMT
	I0115 09:46:11.489064   94931 round_trippers.go:580]     Audit-Id: 3403377e-b245-4bfa-9ed0-7f1b7329465f
	I0115 09:46:11.489069   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:11.489238   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0115 09:46:11.489567   94931 pod_ready.go:92] pod "coredns-5dd5756b68-q8r7r" in "kube-system" namespace has status "Ready":"True"
	I0115 09:46:11.489584   94931 pod_ready.go:81] duration metric: took 5.317255ms waiting for pod "coredns-5dd5756b68-q8r7r" in "kube-system" namespace to be "Ready" ...
	I0115 09:46:11.489592   94931 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-218062" in "kube-system" namespace to be "Ready" ...
	I0115 09:46:11.489650   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-218062
	I0115 09:46:11.489660   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:11.489667   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:11.489672   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:11.491526   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:46:11.491544   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:11.491550   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:11.491556   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:11.491561   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:11.491567   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:11.491575   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:11 GMT
	I0115 09:46:11.491581   94931 round_trippers.go:580]     Audit-Id: d89fa968-31ad-46d9-bdba-19c3db4da0a0
	I0115 09:46:11.491674   94931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-218062","namespace":"kube-system","uid":"c2e637f2-99f6-4803-be29-1bf3bc7b1c47","resourceVersion":"316","creationTimestamp":"2024-01-15T09:45:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"30e59e7ab1ab931a77b3e9e53c2d0ba9","kubernetes.io/config.mirror":"30e59e7ab1ab931a77b3e9e53c2d0ba9","kubernetes.io/config.seen":"2024-01-15T09:45:37.928676186Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0115 09:46:11.492077   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:46:11.492094   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:11.492101   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:11.492107   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:11.493947   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:46:11.493963   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:11.493969   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:11.493975   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:11.493980   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:11.493985   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:11 GMT
	I0115 09:46:11.493990   94931 round_trippers.go:580]     Audit-Id: 98181d61-89ab-4458-9858-5d34a2c7a27e
	I0115 09:46:11.493995   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:11.494217   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0115 09:46:11.494489   94931 pod_ready.go:92] pod "etcd-multinode-218062" in "kube-system" namespace has status "Ready":"True"
	I0115 09:46:11.494506   94931 pod_ready.go:81] duration metric: took 4.905162ms waiting for pod "etcd-multinode-218062" in "kube-system" namespace to be "Ready" ...
	I0115 09:46:11.494521   94931 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-218062" in "kube-system" namespace to be "Ready" ...
	I0115 09:46:11.494584   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-218062
	I0115 09:46:11.494591   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:11.494597   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:11.494603   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:11.496404   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:46:11.496425   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:11.496435   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:11.496445   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:11 GMT
	I0115 09:46:11.496453   94931 round_trippers.go:580]     Audit-Id: 0db6ac94-04b0-430b-8ba0-0ff6e34d26a1
	I0115 09:46:11.496462   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:11.496468   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:11.496477   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:11.496626   94931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-218062","namespace":"kube-system","uid":"612565a1-03c7-4efa-a8d5-e70019357d3b","resourceVersion":"322","creationTimestamp":"2024-01-15T09:45:38Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"867601f8a47792c2c7318b76d01280c1","kubernetes.io/config.mirror":"867601f8a47792c2c7318b76d01280c1","kubernetes.io/config.seen":"2024-01-15T09:45:37.928667412Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0115 09:46:11.497070   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:46:11.497083   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:11.497092   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:11.497127   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:11.498988   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:46:11.499009   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:11.499019   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:11 GMT
	I0115 09:46:11.499028   94931 round_trippers.go:580]     Audit-Id: 5dab76f0-0806-4ff7-81e3-5c0a9073d575
	I0115 09:46:11.499038   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:11.499053   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:11.499062   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:11.499069   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:11.499176   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0115 09:46:11.499567   94931 pod_ready.go:92] pod "kube-apiserver-multinode-218062" in "kube-system" namespace has status "Ready":"True"
	I0115 09:46:11.499591   94931 pod_ready.go:81] duration metric: took 5.059446ms waiting for pod "kube-apiserver-multinode-218062" in "kube-system" namespace to be "Ready" ...
	I0115 09:46:11.499608   94931 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-218062" in "kube-system" namespace to be "Ready" ...
	I0115 09:46:11.499681   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-218062
	I0115 09:46:11.499694   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:11.499705   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:11.499718   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:11.501653   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:46:11.501674   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:11.501681   94931 round_trippers.go:580]     Audit-Id: 79f09d44-7629-4d8b-b1f0-e107baae2ee1
	I0115 09:46:11.501687   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:11.501702   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:11.501711   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:11.501717   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:11.501724   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:11 GMT
	I0115 09:46:11.501925   94931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-218062","namespace":"kube-system","uid":"cf87fc09-c319-419d-9411-5d12e72566dc","resourceVersion":"312","creationTimestamp":"2024-01-15T09:45:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9b5db148a2783c339cfc32ec0cce5f01","kubernetes.io/config.mirror":"9b5db148a2783c339cfc32ec0cce5f01","kubernetes.io/config.seen":"2024-01-15T09:45:37.928673385Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0115 09:46:11.502462   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:46:11.502479   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:11.502491   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:11.502501   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:11.504336   94931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 09:46:11.504361   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:11.504372   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:11.504379   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:11.504385   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:11.504390   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:11.504399   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:11 GMT
	I0115 09:46:11.504404   94931 round_trippers.go:580]     Audit-Id: a4e00d94-db4b-4cea-83c4-48493df779f0
	I0115 09:46:11.504533   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0115 09:46:11.504801   94931 pod_ready.go:92] pod "kube-controller-manager-multinode-218062" in "kube-system" namespace has status "Ready":"True"
	I0115 09:46:11.504814   94931 pod_ready.go:81] duration metric: took 5.193498ms waiting for pod "kube-controller-manager-multinode-218062" in "kube-system" namespace to be "Ready" ...
	I0115 09:46:11.504823   94931 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c5s76" in "kube-system" namespace to be "Ready" ...
	I0115 09:46:11.675249   94931 request.go:629] Waited for 170.36262ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c5s76
	I0115 09:46:11.675340   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c5s76
	I0115 09:46:11.675349   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:11.675357   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:11.675365   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:11.677763   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:46:11.677788   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:11.677797   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:11.677803   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:11 GMT
	I0115 09:46:11.677809   94931 round_trippers.go:580]     Audit-Id: aeb448dd-577f-4629-ae8e-4a3d78399075
	I0115 09:46:11.677814   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:11.677819   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:11.677824   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:11.677983   94931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c5s76","generateName":"kube-proxy-","namespace":"kube-system","uid":"d48e516d-6a91-4892-848f-b6318fb21880","resourceVersion":"408","creationTimestamp":"2024-01-15T09:45:50Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"42c9091f-b236-4566-a092-2569351741c0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42c9091f-b236-4566-a092-2569351741c0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0115 09:46:11.875841   94931 request.go:629] Waited for 197.404812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:46:11.875923   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:46:11.875928   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:11.875936   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:11.875943   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:11.878375   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:46:11.878399   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:11.878406   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:11.878412   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:11.878418   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:11.878424   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:11 GMT
	I0115 09:46:11.878433   94931 round_trippers.go:580]     Audit-Id: f39a0a52-57df-42ff-beeb-e7b0041c52b4
	I0115 09:46:11.878447   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:11.878651   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0115 09:46:11.878996   94931 pod_ready.go:92] pod "kube-proxy-c5s76" in "kube-system" namespace has status "Ready":"True"
	I0115 09:46:11.879013   94931 pod_ready.go:81] duration metric: took 374.184839ms waiting for pod "kube-proxy-c5s76" in "kube-system" namespace to be "Ready" ...
	I0115 09:46:11.879023   94931 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mnjs9" in "kube-system" namespace to be "Ready" ...
	I0115 09:46:12.075867   94931 request.go:629] Waited for 196.785433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mnjs9
	I0115 09:46:12.075942   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mnjs9
	I0115 09:46:12.075947   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:12.075954   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:12.075960   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:12.078430   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:46:12.078463   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:12.078478   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:12.078488   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:12.078500   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:12 GMT
	I0115 09:46:12.078511   94931 round_trippers.go:580]     Audit-Id: 8c09f948-bfe8-4e03-a8d2-027294ea6fda
	I0115 09:46:12.078517   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:12.078525   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:12.078666   94931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mnjs9","generateName":"kube-proxy-","namespace":"kube-system","uid":"bb22d5e2-ebe7-4517-b9fa-6d28fb506f6d","resourceVersion":"488","creationTimestamp":"2024-01-15T09:46:09Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"42c9091f-b236-4566-a092-2569351741c0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:46:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42c9091f-b236-4566-a092-2569351741c0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0115 09:46:12.275455   94931 request.go:629] Waited for 196.342274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-218062-m02
	I0115 09:46:12.275510   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062-m02
	I0115 09:46:12.275515   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:12.275523   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:12.275529   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:12.277740   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:46:12.277764   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:12.277771   94931 round_trippers.go:580]     Audit-Id: d8b7df88-4ef7-45e0-8cc1-2a910757d6aa
	I0115 09:46:12.277777   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:12.277783   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:12.277797   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:12.277806   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:12.277814   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:12 GMT
	I0115 09:46:12.277905   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062-m02","uid":"63116a77-4fce-448e-bd9f-189a37d68976","resourceVersion":"484","creationTimestamp":"2024-01-15T09:46:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T09_46_10_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:46:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5848 chars]
	I0115 09:46:12.278315   94931 pod_ready.go:92] pod "kube-proxy-mnjs9" in "kube-system" namespace has status "Ready":"True"
	I0115 09:46:12.278336   94931 pod_ready.go:81] duration metric: took 399.307682ms waiting for pod "kube-proxy-mnjs9" in "kube-system" namespace to be "Ready" ...
	I0115 09:46:12.278351   94931 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-218062" in "kube-system" namespace to be "Ready" ...
	I0115 09:46:12.475380   94931 request.go:629] Waited for 196.950268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-218062
	I0115 09:46:12.475456   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-218062
	I0115 09:46:12.475466   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:12.475474   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:12.475483   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:12.477921   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:46:12.477941   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:12.477947   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:12 GMT
	I0115 09:46:12.477953   94931 round_trippers.go:580]     Audit-Id: 840bdecd-5bd5-4a49-9616-8563f781a5df
	I0115 09:46:12.477958   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:12.477968   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:12.477974   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:12.477979   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:12.478113   94931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-218062","namespace":"kube-system","uid":"9673d427-a1d0-4df8-bb2a-16d180ba0873","resourceVersion":"320","creationTimestamp":"2024-01-15T09:45:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4fdbcfe7d9399dde072e519487ea43b9","kubernetes.io/config.mirror":"4fdbcfe7d9399dde072e519487ea43b9","kubernetes.io/config.seen":"2024-01-15T09:45:37.928674722Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T09:45:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0115 09:46:12.675851   94931 request.go:629] Waited for 197.359933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:46:12.675909   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-218062
	I0115 09:46:12.675914   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:12.675922   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:12.675928   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:12.678398   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:46:12.678426   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:12.678438   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:12.678448   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:12.678467   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:12.678477   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:12 GMT
	I0115 09:46:12.678483   94931 round_trippers.go:580]     Audit-Id: 76751427-880f-4e6f-8e6c-49079bd3f724
	I0115 09:46:12.678490   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:12.678649   94931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T09:45:35Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0115 09:46:12.678989   94931 pod_ready.go:92] pod "kube-scheduler-multinode-218062" in "kube-system" namespace has status "Ready":"True"
	I0115 09:46:12.679006   94931 pod_ready.go:81] duration metric: took 400.647754ms waiting for pod "kube-scheduler-multinode-218062" in "kube-system" namespace to be "Ready" ...
	I0115 09:46:12.679016   94931 pod_ready.go:38] duration metric: took 1.2010686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 09:46:12.679037   94931 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 09:46:12.679095   94931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:46:12.689930   94931 system_svc.go:56] duration metric: took 10.887775ms WaitForService to wait for kubelet.
	I0115 09:46:12.689957   94931 kubeadm.go:581] duration metric: took 2.231026509s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 09:46:12.689983   94931 node_conditions.go:102] verifying NodePressure condition ...
	I0115 09:46:12.875398   94931 request.go:629] Waited for 185.34063ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0115 09:46:12.875484   94931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0115 09:46:12.875496   94931 round_trippers.go:469] Request Headers:
	I0115 09:46:12.875504   94931 round_trippers.go:473]     Accept: application/json, */*
	I0115 09:46:12.875510   94931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0115 09:46:12.878065   94931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 09:46:12.878088   94931 round_trippers.go:577] Response Headers:
	I0115 09:46:12.878095   94931 round_trippers.go:580]     Audit-Id: 97d84088-01b6-454b-8c91-03d5926f6265
	I0115 09:46:12.878102   94931 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 09:46:12.878110   94931 round_trippers.go:580]     Content-Type: application/json
	I0115 09:46:12.878118   94931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c055039-7df6-4b3b-b22d-67f58347480e
	I0115 09:46:12.878126   94931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 36d23ce9-da33-40fd-971e-71fe6da7b062
	I0115 09:46:12.878136   94931 round_trippers.go:580]     Date: Mon, 15 Jan 2024 09:46:12 GMT
	I0115 09:46:12.878365   94931 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"multinode-218062","uid":"47c30bc0-4228-4816-8f0b-a2044dbd4f51","resourceVersion":"412","creationTimestamp":"2024-01-15T09:45:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-218062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-218062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T09_45_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12840 chars]
	I0115 09:46:12.879181   94931 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0115 09:46:12.879213   94931 node_conditions.go:123] node cpu capacity is 8
	I0115 09:46:12.879227   94931 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0115 09:46:12.879235   94931 node_conditions.go:123] node cpu capacity is 8
	I0115 09:46:12.879258   94931 node_conditions.go:105] duration metric: took 189.268432ms to run NodePressure ...
	I0115 09:46:12.879273   94931 start.go:228] waiting for startup goroutines ...
	I0115 09:46:12.879313   94931 start.go:242] writing updated cluster config ...
	I0115 09:46:12.879824   94931 ssh_runner.go:195] Run: rm -f paused
	I0115 09:46:12.923996   94931 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 09:46:12.927036   94931 out.go:177] * Done! kubectl is now configured to use "multinode-218062" cluster and "default" namespace by default
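
The wait loop recorded above (node_ready.go / pod_ready.go) simply polls the API server until the worker node and each system-critical pod report a Ready condition of "True"; the repeated GETs are also what trigger the "client-side throttling" messages, since client-go paces requests with its own rate limiter (QPS 5, burst 10 unless overridden) before server-side priority-and-fairness is ever involved. The following is a minimal, hypothetical client-go sketch of that kind of polling, not the minikube source; the label selector, namespace and timeout are assumptions for illustration.

    // waitready.go: poll kube-system pods matching a label until Ready or timeout.
    // Illustrative sketch only; label, namespace, and timeout are assumptions.
    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(p corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // cfg.QPS / cfg.Burst control the client-side throttling seen in the log above.
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
                metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
            if err == nil && len(pods.Items) > 0 && podReady(pods.Items[0]) {
                fmt.Println("coredns is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for coredns")
        os.Exit(1)
    }
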
	
	
	==> CRI-O <==
	Jan 15 09:45:53 multinode-218062 crio[960]: time="2024-01-15 09:45:53.077529814Z" level=info msg="Starting container: 88f180abfe899811a22bafca3bbe318327aee96e941c8e1b56287abcb77d1633" id=bee5cdf1-2e48-4215-a9bc-60689c37994d name=/runtime.v1.RuntimeService/StartContainer
	Jan 15 09:45:53 multinode-218062 crio[960]: time="2024-01-15 09:45:53.083345696Z" level=info msg="Started container" PID=2303 containerID=88f180abfe899811a22bafca3bbe318327aee96e941c8e1b56287abcb77d1633 description=kube-system/storage-provisioner/storage-provisioner id=bee5cdf1-2e48-4215-a9bc-60689c37994d name=/runtime.v1.RuntimeService/StartContainer sandboxID=968d0f5f1cbbf879bfdeaeb1e40b54acbe1a1964a999aa0e76d90d4db862f9b6
	Jan 15 09:45:53 multinode-218062 crio[960]: time="2024-01-15 09:45:53.128321181Z" level=info msg="Created container b5203e9816e604a7f3181ac300f5fd384679151934e1e5d47d80f6a4d568a79f: kube-system/coredns-5dd5756b68-q8r7r/coredns" id=ef809b06-d18d-4f59-9009-975512c4c072 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 15 09:45:53 multinode-218062 crio[960]: time="2024-01-15 09:45:53.129653328Z" level=info msg="Starting container: b5203e9816e604a7f3181ac300f5fd384679151934e1e5d47d80f6a4d568a79f" id=0dce8e09-dda3-4ed0-bf65-fc924fd3318b name=/runtime.v1.RuntimeService/StartContainer
	Jan 15 09:45:53 multinode-218062 crio[960]: time="2024-01-15 09:45:53.137021205Z" level=info msg="Started container" PID=2328 containerID=b5203e9816e604a7f3181ac300f5fd384679151934e1e5d47d80f6a4d568a79f description=kube-system/coredns-5dd5756b68-q8r7r/coredns id=0dce8e09-dda3-4ed0-bf65-fc924fd3318b name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d895dc339b61005e99d2f77ee461f7e3547215a71a44a2f5ac0d2d4bf6eec2b
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.002483386Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-djgvv/POD" id=c6442521-ca6e-4e5e-9b46-ec4944846766 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.002557203Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.017758572Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-djgvv Namespace:default ID:51387e3e203e4d244a8fd5713eaa50caa67b7db09c5b0d60a1cb4e8c6e995153 UID:0bbf1c4d-bb6f-4213-a4ea-917df98db81e NetNS:/var/run/netns/0a9b6878-18e1-4738-ad81-17ae06a1f392 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.017805662Z" level=info msg="Adding pod default_busybox-5bc68d56bd-djgvv to CNI network \"kindnet\" (type=ptp)"
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.027330910Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-djgvv Namespace:default ID:51387e3e203e4d244a8fd5713eaa50caa67b7db09c5b0d60a1cb4e8c6e995153 UID:0bbf1c4d-bb6f-4213-a4ea-917df98db81e NetNS:/var/run/netns/0a9b6878-18e1-4738-ad81-17ae06a1f392 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.027483595Z" level=info msg="Checking pod default_busybox-5bc68d56bd-djgvv for CNI network kindnet (type=ptp)"
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.030966947Z" level=info msg="Ran pod sandbox 51387e3e203e4d244a8fd5713eaa50caa67b7db09c5b0d60a1cb4e8c6e995153 with infra container: default/busybox-5bc68d56bd-djgvv/POD" id=c6442521-ca6e-4e5e-9b46-ec4944846766 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.032025882Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=6abdfa2a-773c-48aa-aba5-53e432c8978f name=/runtime.v1.ImageService/ImageStatus
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.032262300Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=6abdfa2a-773c-48aa-aba5-53e432c8978f name=/runtime.v1.ImageService/ImageStatus
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.035289164Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=5b270033-24c5-4a3f-8515-02537f6ab0c0 name=/runtime.v1.ImageService/PullImage
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.036186577Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.310797787Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.786957667Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=5b270033-24c5-4a3f-8515-02537f6ab0c0 name=/runtime.v1.ImageService/PullImage
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.787922667Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=7693755f-489a-41ca-8062-143917de6cb2 name=/runtime.v1.ImageService/ImageStatus
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.788532639Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=7693755f-489a-41ca-8062-143917de6cb2 name=/runtime.v1.ImageService/ImageStatus
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.789568818Z" level=info msg="Creating container: default/busybox-5bc68d56bd-djgvv/busybox" id=7d131d70-1340-4f6c-b59b-81a3c1186abe name=/runtime.v1.RuntimeService/CreateContainer
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.789676161Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.835664936Z" level=info msg="Created container 8b5f3d9209464fca360b12ba5b3582f7035ddec821c60db29cbe426f34fc6bad: default/busybox-5bc68d56bd-djgvv/busybox" id=7d131d70-1340-4f6c-b59b-81a3c1186abe name=/runtime.v1.RuntimeService/CreateContainer
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.836364552Z" level=info msg="Starting container: 8b5f3d9209464fca360b12ba5b3582f7035ddec821c60db29cbe426f34fc6bad" id=29030cc8-6e85-4dbf-a1d9-7818306583be name=/runtime.v1.RuntimeService/StartContainer
	Jan 15 09:46:14 multinode-218062 crio[960]: time="2024-01-15 09:46:14.843561319Z" level=info msg="Started container" PID=2492 containerID=8b5f3d9209464fca360b12ba5b3582f7035ddec821c60db29cbe426f34fc6bad description=default/busybox-5bc68d56bd-djgvv/busybox id=29030cc8-6e85-4dbf-a1d9-7818306583be name=/runtime.v1.RuntimeService/StartContainer sandboxID=51387e3e203e4d244a8fd5713eaa50caa67b7db09c5b0d60a1cb4e8c6e995153
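
The CRI-O excerpt above records the full lifecycle of the busybox test pod on this node: run the sandbox and wire it into the kindnet CNI network, notice the image is missing, pull it by digest, create the container, start it. A small, hypothetical helper for pulling just those lifecycle events out of a saved CRI-O log excerpt is sketched below (standard library only; the input file name is an assumption).

    // crio_events.go: print "Pulled image", "Created container" and "Started container"
    // events from a saved CRI-O journal excerpt. Illustrative helper, not part of minikube.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        f, err := os.Open("crio.log") // assumed path to the captured excerpt
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        // Capture the timestamp and event message from lines such as:
        //   ... time="2024-01-15 09:46:14.835664936Z" level=info msg="Created container ..."
        re := regexp.MustCompile(`time="([^"]+)" level=info msg="((?:Pulled image|Created container|Started container)[^"]*)"`)

        sc := bufio.NewScanner(f)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some CRI-O lines are long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Printf("%s  %s\n", m[1], m[2])
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
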
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8b5f3d9209464       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 seconds ago       Running             busybox                   0                   51387e3e203e4       busybox-5bc68d56bd-djgvv
	b5203e9816e60       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      25 seconds ago      Running             coredns                   0                   2d895dc339b61       coredns-5dd5756b68-q8r7r
	88f180abfe899       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      25 seconds ago      Running             storage-provisioner       0                   968d0f5f1cbbf       storage-provisioner
	382cc08954e89       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      27 seconds ago      Running             kindnet-cni               0                   bc5bc44d9c603       kindnet-692j9
	9244c5655bc76       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      27 seconds ago      Running             kube-proxy                0                   367e5cbbada20       kube-proxy-c5s76
	448ddeac675f9       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      45 seconds ago      Running             kube-scheduler            0                   ec04e94d1ab76       kube-scheduler-multinode-218062
	c574295e95812       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      45 seconds ago      Running             kube-apiserver            0                   9482664f26c82       kube-apiserver-multinode-218062
	d728d05f1c263       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   2d46c9590dc0d       etcd-multinode-218062
	489ca69ea5606       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      45 seconds ago      Running             kube-controller-manager   0                   e1f38a251291c       kube-controller-manager-multinode-218062
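
The container status table is the CRI runtime's view of the node (roughly what `crictl ps -a` would report). As a sketch of how that listing can be fetched programmatically, here is a minimal gRPC client against the CRI-O socket using the Kubernetes CRI v1 API; the socket path matches the kubeadm cri-socket annotation seen earlier, while the output format and error handling are assumptions, and this is not how the test harness itself collects the table.

    // cri_list.go: list containers from the CRI-O runtime over its unix socket.
    // A sketch against the CRI v1 API; illustrative only.
    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, c := range resp.Containers {
            // ID prefix, image reference, state and name correspond to the columns above.
            fmt.Printf("%.13s  %s  %s  %s\n",
                c.Id, c.Image.GetImage(), c.State.String(), c.Metadata.GetName())
        }
    }
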
	
	
	==> coredns [b5203e9816e604a7f3181ac300f5fd384679151934e1e5d47d80f6a4d568a79f] <==
	[INFO] 10.244.1.2:57062 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092359s
	[INFO] 10.244.0.3:52226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009678s
	[INFO] 10.244.0.3:55305 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001883808s
	[INFO] 10.244.0.3:48799 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092584s
	[INFO] 10.244.0.3:43071 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079542s
	[INFO] 10.244.0.3:54725 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001250042s
	[INFO] 10.244.0.3:46303 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000624s
	[INFO] 10.244.0.3:56885 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066869s
	[INFO] 10.244.0.3:57657 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005754s
	[INFO] 10.244.1.2:36793 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145534s
	[INFO] 10.244.1.2:47764 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095772s
	[INFO] 10.244.1.2:52483 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000052901s
	[INFO] 10.244.1.2:34490 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000474s
	[INFO] 10.244.0.3:35511 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088931s
	[INFO] 10.244.0.3:49095 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084526s
	[INFO] 10.244.0.3:55544 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053684s
	[INFO] 10.244.0.3:34275 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045392s
	[INFO] 10.244.1.2:41613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013321s
	[INFO] 10.244.1.2:37837 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000150978s
	[INFO] 10.244.1.2:33036 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104394s
	[INFO] 10.244.1.2:34933 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095793s
	[INFO] 10.244.0.3:51455 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117566s
	[INFO] 10.244.0.3:45669 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000068399s
	[INFO] 10.244.0.3:44289 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000068261s
	[INFO] 10.244.0.3:45061 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000052946s
	
	
	==> describe nodes <==
	Name:               multinode-218062
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-218062
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=multinode-218062
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T09_45_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 09:45:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-218062
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 09:46:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 09:45:52 +0000   Mon, 15 Jan 2024 09:45:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 09:45:52 +0000   Mon, 15 Jan 2024 09:45:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 09:45:52 +0000   Mon, 15 Jan 2024 09:45:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 09:45:52 +0000   Mon, 15 Jan 2024 09:45:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-218062
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 b6f98437824b4447900d708d59d67616
	  System UUID:                3483d4e6-e690-40ad-8ba5-2fe7d7dd2904
	  Boot ID:                    cfbd0cf6-9096-4b85-b302-a1df984ff6e8
	  Kernel Version:             5.15.0-1048-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-djgvv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-5dd5756b68-q8r7r                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-multinode-218062                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-692j9                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-multinode-218062             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-multinode-218062    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-c5s76                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-multinode-218062             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node multinode-218062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node multinode-218062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node multinode-218062 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node multinode-218062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node multinode-218062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node multinode-218062 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node multinode-218062 event: Registered Node multinode-218062 in Controller
	  Normal  NodeReady                26s                kubelet          Node multinode-218062 status is now: NodeReady
	
	
	Name:               multinode-218062-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-218062-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=multinode-218062
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_15T09_46_10_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 09:46:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-218062-m02" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 09:46:11 +0000   Mon, 15 Jan 2024 09:46:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 09:46:11 +0000   Mon, 15 Jan 2024 09:46:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 09:46:11 +0000   Mon, 15 Jan 2024 09:46:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 09:46:11 +0000   Mon, 15 Jan 2024 09:46:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-218062-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 71c733c746ec4749976133b1f2d2dd53
	  System UUID:                7a72f93d-565d-47a5-9bac-e658bc669c7c
	  Boot ID:                    cfbd0cf6-9096-4b85-b302-a1df984ff6e8
	  Kernel Version:             5.15.0-1048-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-cplh9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-g9j9g               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9s
	  kube-system                 kube-proxy-mnjs9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age               From             Message
	  ----    ------                   ----              ----             -------
	  Normal  Starting                 7s                kube-proxy       
	  Normal  NodeHasSufficientMemory  9s (x5 over 10s)  kubelet          Node multinode-218062-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x5 over 10s)  kubelet          Node multinode-218062-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x5 over 10s)  kubelet          Node multinode-218062-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8s                node-controller  Node multinode-218062-m02 event: Registered Node multinode-218062-m02 in Controller
	  Normal  NodeReady                7s                kubelet          Node multinode-218062-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.004923] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006597] FS-Cache: N-cookie d=000000004a606ad2{9p.inode} n=000000008a9152b2
	[  +0.008754] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.308298] FS-Cache: Duplicate cookie detected
	[  +0.004676] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000004a606ad2{9p.inode} n=000000009b2895ef
	[  +0.007370] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.008060] FS-Cache: N-cookie d=000000004a606ad2{9p.inode} n=000000007fcb8ee9
	[  +0.008754] FS-Cache: N-key=[8] '0690130200000000'
	[ +24.537057] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan15 09:37] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ba a2 32 d8 98 aa be fb 7b 0f 17 d3 08 00
	[  +1.024160] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: ba a2 32 d8 98 aa be fb 7b 0f 17 d3 08 00
	[  +2.015838] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ba a2 32 d8 98 aa be fb 7b 0f 17 d3 08 00
	[  +4.255683] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ba a2 32 d8 98 aa be fb 7b 0f 17 d3 08 00
	[Jan15 09:38] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ba a2 32 d8 98 aa be fb 7b 0f 17 d3 08 00
	[ +16.122906] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: ba a2 32 d8 98 aa be fb 7b 0f 17 d3 08 00
	[ +33.277743] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ba a2 32 d8 98 aa be fb 7b 0f 17 d3 08 00
	
	
	==> etcd [d728d05f1c2632f7cf7b0ac46af64a5a33356c73142ec6c4cf9b7f5f5475b01b] <==
	{"level":"info","ts":"2024-01-15T09:45:33.046042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2024-01-15T09:45:33.046229Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2024-01-15T09:45:33.047082Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-15T09:45:33.047213Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-15T09:45:33.047269Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-15T09:45:33.047296Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-15T09:45:33.047268Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-15T09:45:33.235867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-15T09:45:33.235918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-15T09:45:33.235937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2024-01-15T09:45:33.235952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2024-01-15T09:45:33.23596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-15T09:45:33.235973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2024-01-15T09:45:33.235984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-15T09:45:33.236944Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T09:45:33.237721Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T09:45:33.237749Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T09:45:33.237716Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-218062 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-15T09:45:33.238356Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-15T09:45:33.238428Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-15T09:45:33.238942Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T09:45:33.239045Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T09:45:33.239087Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T09:45:33.239636Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-15T09:45:33.240119Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	
	
	==> kernel <==
	 09:46:18 up 28 min,  0 users,  load average: 0.69, 0.84, 0.63
	Linux multinode-218062 5.15.0-1048-gcp #56~20.04.1-Ubuntu SMP Fri Nov 24 16:52:37 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [382cc08954e896eb1c0c0e4baa3739c6bf7a8f48f5b152eb6a85b0f81584507a] <==
	I0115 09:45:51.726703       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0115 09:45:51.726762       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0115 09:45:51.726919       1 main.go:116] setting mtu 1500 for CNI 
	I0115 09:45:51.726939       1 main.go:146] kindnetd IP family: "ipv4"
	I0115 09:45:51.726957       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0115 09:45:52.126284       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0115 09:45:52.126312       1 main.go:227] handling current node
	I0115 09:46:02.140483       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0115 09:46:02.140506       1 main.go:227] handling current node
	I0115 09:46:12.152590       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0115 09:46:12.152617       1 main.go:227] handling current node
	I0115 09:46:12.152630       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0115 09:46:12.152636       1 main.go:250] Node multinode-218062-m02 has CIDR [10.244.1.0/24] 
	I0115 09:46:12.152812       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	
	==> kube-apiserver [c574295e958126a1062510aeae9fccd3073ee7a3c125e57dd5002bd15d86a176] <==
	I0115 09:45:35.129832       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0115 09:45:35.129862       1 cache.go:39] Caches are synced for autoregister controller
	I0115 09:45:35.132101       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0115 09:45:35.133173       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0115 09:45:35.194418       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0115 09:45:35.196538       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0115 09:45:35.196558       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0115 09:45:35.196649       1 shared_informer.go:318] Caches are synced for configmaps
	I0115 09:45:35.227262       1 controller.go:624] quota admission added evaluator for: namespaces
	I0115 09:45:35.325581       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0115 09:45:36.000491       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0115 09:45:36.004626       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0115 09:45:36.004644       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0115 09:45:36.440830       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0115 09:45:36.480502       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0115 09:45:36.547966       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0115 09:45:36.553533       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0115 09:45:36.554491       1 controller.go:624] quota admission added evaluator for: endpoints
	I0115 09:45:36.558696       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0115 09:45:37.058608       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0115 09:45:37.840291       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0115 09:45:37.850305       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0115 09:45:37.858930       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0115 09:45:50.737527       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0115 09:45:50.840166       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [489ca69ea56069a4b519ea75c43d40109a2c305710cb6eb409e0a845ff428804] <==
	I0115 09:45:51.238468       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="140.689µs"
	I0115 09:45:52.704066       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.773µs"
	I0115 09:45:52.720639       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.579µs"
	I0115 09:45:54.065574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.423899ms"
	I0115 09:45:54.065703       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.31µs"
	I0115 09:45:55.259853       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0115 09:46:09.797746       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-218062-m02\" does not exist"
	I0115 09:46:09.803065       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-218062-m02" podCIDRs=["10.244.1.0/24"]
	I0115 09:46:09.808319       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-g9j9g"
	I0115 09:46:09.808420       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mnjs9"
	I0115 09:46:10.261481       1 event.go:307] "Event occurred" object="multinode-218062-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-218062-m02 event: Registered Node multinode-218062-m02 in Controller"
	I0115 09:46:10.261482       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-218062-m02"
	I0115 09:46:11.141854       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-218062-m02"
	I0115 09:46:13.681825       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0115 09:46:13.689342       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-cplh9"
	I0115 09:46:13.694445       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-djgvv"
	I0115 09:46:13.700242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="18.577348ms"
	I0115 09:46:13.713265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="12.961604ms"
	I0115 09:46:13.721053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="7.728231ms"
	I0115 09:46:13.721194       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="98.852µs"
	I0115 09:46:15.093906       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.21833ms"
	I0115 09:46:15.094001       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.648µs"
	I0115 09:46:15.271280       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-cplh9" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-cplh9"
	I0115 09:46:15.329267       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.290183ms"
	I0115 09:46:15.329361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="45.153µs"
	
	
	==> kube-proxy [9244c5655bc76011c5bf3783dd849cc1fd378735debf3a77840857e2e8eebe24] <==
	I0115 09:45:51.456296       1 server_others.go:69] "Using iptables proxy"
	I0115 09:45:51.468358       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0115 09:45:51.542270       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0115 09:45:51.544255       1 server_others.go:152] "Using iptables Proxier"
	I0115 09:45:51.544297       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0115 09:45:51.544309       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0115 09:45:51.544351       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 09:45:51.544730       1 server.go:846] "Version info" version="v1.28.4"
	I0115 09:45:51.544757       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 09:45:51.545625       1 config.go:315] "Starting node config controller"
	I0115 09:45:51.545655       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 09:45:51.545790       1 config.go:188] "Starting service config controller"
	I0115 09:45:51.545860       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 09:45:51.545927       1 config.go:97] "Starting endpoint slice config controller"
	I0115 09:45:51.545961       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 09:45:51.646476       1 shared_informer.go:318] Caches are synced for node config
	I0115 09:45:51.646514       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0115 09:45:51.646545       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [448ddeac675f98d0c029853326796e4e0c25f075e54e0b02c5c2ce92e55388c3] <==
	W0115 09:45:35.226860       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 09:45:35.227647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0115 09:45:35.226945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0115 09:45:35.227698       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0115 09:45:35.226991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 09:45:35.227732       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0115 09:45:35.227039       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 09:45:35.227751       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0115 09:45:35.226819       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0115 09:45:35.227773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0115 09:45:35.227080       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 09:45:35.227793       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0115 09:45:35.227105       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 09:45:35.227809       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0115 09:45:36.041944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 09:45:36.041977       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0115 09:45:36.075732       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 09:45:36.075763       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0115 09:45:36.113268       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0115 09:45:36.113301       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0115 09:45:36.147597       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0115 09:45:36.147641       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0115 09:45:36.273441       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0115 09:45:36.273481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0115 09:45:36.650603       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 15 09:45:50 multinode-218062 kubelet[1582]: I0115 09:45:50.828851    1582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d48e516d-6a91-4892-848f-b6318fb21880-lib-modules\") pod \"kube-proxy-c5s76\" (UID: \"d48e516d-6a91-4892-848f-b6318fb21880\") " pod="kube-system/kube-proxy-c5s76"
	Jan 15 09:45:50 multinode-218062 kubelet[1582]: I0115 09:45:50.828935    1582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26zzc\" (UniqueName: \"kubernetes.io/projected/d48e516d-6a91-4892-848f-b6318fb21880-kube-api-access-26zzc\") pod \"kube-proxy-c5s76\" (UID: \"d48e516d-6a91-4892-848f-b6318fb21880\") " pod="kube-system/kube-proxy-c5s76"
	Jan 15 09:45:51 multinode-218062 kubelet[1582]: I0115 09:45:51.032362    1582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83db8ca8-afaf-43c4-a6fe-23c3e1c596d2-xtables-lock\") pod \"kindnet-692j9\" (UID: \"83db8ca8-afaf-43c4-a6fe-23c3e1c596d2\") " pod="kube-system/kindnet-692j9"
	Jan 15 09:45:51 multinode-218062 kubelet[1582]: I0115 09:45:51.032411    1582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83db8ca8-afaf-43c4-a6fe-23c3e1c596d2-lib-modules\") pod \"kindnet-692j9\" (UID: \"83db8ca8-afaf-43c4-a6fe-23c3e1c596d2\") " pod="kube-system/kindnet-692j9"
	Jan 15 09:45:51 multinode-218062 kubelet[1582]: I0115 09:45:51.032444    1582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt66l\" (UniqueName: \"kubernetes.io/projected/83db8ca8-afaf-43c4-a6fe-23c3e1c596d2-kube-api-access-nt66l\") pod \"kindnet-692j9\" (UID: \"83db8ca8-afaf-43c4-a6fe-23c3e1c596d2\") " pod="kube-system/kindnet-692j9"
	Jan 15 09:45:51 multinode-218062 kubelet[1582]: I0115 09:45:51.032488    1582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/83db8ca8-afaf-43c4-a6fe-23c3e1c596d2-cni-cfg\") pod \"kindnet-692j9\" (UID: \"83db8ca8-afaf-43c4-a6fe-23c3e1c596d2\") " pod="kube-system/kindnet-692j9"
	Jan 15 09:45:51 multinode-218062 kubelet[1582]: W0115 09:45:51.150727    1582 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/895276697ddf292070b37b36ad96b7f2291cd57ef760db46eff306facb766d84/crio-367e5cbbada20920c9d8b3f7219a2de131db6f4a8f6796d67b55f78aa5a06cca WatchSource:0}: Error finding container 367e5cbbada20920c9d8b3f7219a2de131db6f4a8f6796d67b55f78aa5a06cca: Status 404 returned error can't find the container with id 367e5cbbada20920c9d8b3f7219a2de131db6f4a8f6796d67b55f78aa5a06cca
	Jan 15 09:45:51 multinode-218062 kubelet[1582]: W0115 09:45:51.443502    1582 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/895276697ddf292070b37b36ad96b7f2291cd57ef760db46eff306facb766d84/crio-bc5bc44d9c60388417923d0bee168445715be58be24732e2a490d31ae4625165 WatchSource:0}: Error finding container bc5bc44d9c60388417923d0bee168445715be58be24732e2a490d31ae4625165: Status 404 returned error can't find the container with id bc5bc44d9c60388417923d0bee168445715be58be24732e2a490d31ae4625165
	Jan 15 09:45:52 multinode-218062 kubelet[1582]: I0115 09:45:52.040875    1582 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-692j9" podStartSLOduration=2.040823551 podCreationTimestamp="2024-01-15 09:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-15 09:45:52.040766328 +0000 UTC m=+14.224163690" watchObservedRunningTime="2024-01-15 09:45:52.040823551 +0000 UTC m=+14.224220914"
	Jan 15 09:45:52 multinode-218062 kubelet[1582]: I0115 09:45:52.681001    1582 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 15 09:45:52 multinode-218062 kubelet[1582]: I0115 09:45:52.702295    1582 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-c5s76" podStartSLOduration=2.702243534 podCreationTimestamp="2024-01-15 09:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-15 09:45:52.050053761 +0000 UTC m=+14.233451124" watchObservedRunningTime="2024-01-15 09:45:52.702243534 +0000 UTC m=+14.885640896"
	Jan 15 09:45:52 multinode-218062 kubelet[1582]: I0115 09:45:52.702647    1582 topology_manager.go:215] "Topology Admit Handler" podUID="dc0462ba-e08f-4c5d-8502-0e201cfb2cd2" podNamespace="kube-system" podName="storage-provisioner"
	Jan 15 09:45:52 multinode-218062 kubelet[1582]: I0115 09:45:52.704064    1582 topology_manager.go:215] "Topology Admit Handler" podUID="08d15645-f87b-4962-ac37-afaa15661146" podNamespace="kube-system" podName="coredns-5dd5756b68-q8r7r"
	Jan 15 09:45:52 multinode-218062 kubelet[1582]: I0115 09:45:52.843179    1582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08d15645-f87b-4962-ac37-afaa15661146-config-volume\") pod \"coredns-5dd5756b68-q8r7r\" (UID: \"08d15645-f87b-4962-ac37-afaa15661146\") " pod="kube-system/coredns-5dd5756b68-q8r7r"
	Jan 15 09:45:52 multinode-218062 kubelet[1582]: I0115 09:45:52.843253    1582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dc0462ba-e08f-4c5d-8502-0e201cfb2cd2-tmp\") pod \"storage-provisioner\" (UID: \"dc0462ba-e08f-4c5d-8502-0e201cfb2cd2\") " pod="kube-system/storage-provisioner"
	Jan 15 09:45:52 multinode-218062 kubelet[1582]: I0115 09:45:52.843351    1582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bxlg\" (UniqueName: \"kubernetes.io/projected/08d15645-f87b-4962-ac37-afaa15661146-kube-api-access-2bxlg\") pod \"coredns-5dd5756b68-q8r7r\" (UID: \"08d15645-f87b-4962-ac37-afaa15661146\") " pod="kube-system/coredns-5dd5756b68-q8r7r"
	Jan 15 09:45:52 multinode-218062 kubelet[1582]: I0115 09:45:52.843409    1582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwmrq\" (UniqueName: \"kubernetes.io/projected/dc0462ba-e08f-4c5d-8502-0e201cfb2cd2-kube-api-access-hwmrq\") pod \"storage-provisioner\" (UID: \"dc0462ba-e08f-4c5d-8502-0e201cfb2cd2\") " pod="kube-system/storage-provisioner"
	Jan 15 09:45:53 multinode-218062 kubelet[1582]: W0115 09:45:53.026188    1582 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/895276697ddf292070b37b36ad96b7f2291cd57ef760db46eff306facb766d84/crio-968d0f5f1cbbf879bfdeaeb1e40b54acbe1a1964a999aa0e76d90d4db862f9b6 WatchSource:0}: Error finding container 968d0f5f1cbbf879bfdeaeb1e40b54acbe1a1964a999aa0e76d90d4db862f9b6: Status 404 returned error can't find the container with id 968d0f5f1cbbf879bfdeaeb1e40b54acbe1a1964a999aa0e76d90d4db862f9b6
	Jan 15 09:45:53 multinode-218062 kubelet[1582]: W0115 09:45:53.041816    1582 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/895276697ddf292070b37b36ad96b7f2291cd57ef760db46eff306facb766d84/crio-2d895dc339b61005e99d2f77ee461f7e3547215a71a44a2f5ac0d2d4bf6eec2b WatchSource:0}: Error finding container 2d895dc339b61005e99d2f77ee461f7e3547215a71a44a2f5ac0d2d4bf6eec2b: Status 404 returned error can't find the container with id 2d895dc339b61005e99d2f77ee461f7e3547215a71a44a2f5ac0d2d4bf6eec2b
	Jan 15 09:45:54 multinode-218062 kubelet[1582]: I0115 09:45:54.047838    1582 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.047786582 podCreationTimestamp="2024-01-15 09:45:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-15 09:45:54.047662562 +0000 UTC m=+16.231059926" watchObservedRunningTime="2024-01-15 09:45:54.047786582 +0000 UTC m=+16.231183944"
	Jan 15 09:45:54 multinode-218062 kubelet[1582]: I0115 09:45:54.058317    1582 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-q8r7r" podStartSLOduration=4.058269263 podCreationTimestamp="2024-01-15 09:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-15 09:45:54.058009732 +0000 UTC m=+16.241407106" watchObservedRunningTime="2024-01-15 09:45:54.058269263 +0000 UTC m=+16.241666625"
	Jan 15 09:46:13 multinode-218062 kubelet[1582]: I0115 09:46:13.700578    1582 topology_manager.go:215] "Topology Admit Handler" podUID="0bbf1c4d-bb6f-4213-a4ea-917df98db81e" podNamespace="default" podName="busybox-5bc68d56bd-djgvv"
	Jan 15 09:46:13 multinode-218062 kubelet[1582]: I0115 09:46:13.858348    1582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jplhv\" (UniqueName: \"kubernetes.io/projected/0bbf1c4d-bb6f-4213-a4ea-917df98db81e-kube-api-access-jplhv\") pod \"busybox-5bc68d56bd-djgvv\" (UID: \"0bbf1c4d-bb6f-4213-a4ea-917df98db81e\") " pod="default/busybox-5bc68d56bd-djgvv"
	Jan 15 09:46:14 multinode-218062 kubelet[1582]: W0115 09:46:14.029077    1582 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/895276697ddf292070b37b36ad96b7f2291cd57ef760db46eff306facb766d84/crio-51387e3e203e4d244a8fd5713eaa50caa67b7db09c5b0d60a1cb4e8c6e995153 WatchSource:0}: Error finding container 51387e3e203e4d244a8fd5713eaa50caa67b7db09c5b0d60a1cb4e8c6e995153: Status 404 returned error can't find the container with id 51387e3e203e4d244a8fd5713eaa50caa67b7db09c5b0d60a1cb4e8c6e995153
	Jan 15 09:46:15 multinode-218062 kubelet[1582]: I0115 09:46:15.088955    1582 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-djgvv" podStartSLOduration=1.3338658589999999 podCreationTimestamp="2024-01-15 09:46:13 +0000 UTC" firstStartedPulling="2024-01-15 09:46:14.032421291 +0000 UTC m=+36.215818636" lastFinishedPulling="2024-01-15 09:46:14.787448866 +0000 UTC m=+36.970846221" observedRunningTime="2024-01-15 09:46:15.088469671 +0000 UTC m=+37.271867033" watchObservedRunningTime="2024-01-15 09:46:15.088893444 +0000 UTC m=+37.272290809"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-218062 -n multinode-218062
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-218062 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (2.90s)
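
For manual triage, a minimal sketch of the host-ping check this test automates, using the busybox pods and cluster context named in the post-mortem above (pod names and addresses differ per run, and the exact assertions of the Go test may differ from these commands):

    # Resolve and ping the host from each busybox pod; per the coredns log above,
    # host.minikube.internal resolves to 192.168.58.1 on this run.
    kubectl --context multinode-218062 exec busybox-5bc68d56bd-djgvv -- nslookup host.minikube.internal
    kubectl --context multinode-218062 exec busybox-5bc68d56bd-djgvv -- sh -c "ping -c 1 host.minikube.internal"
    kubectl --context multinode-218062 exec busybox-5bc68d56bd-cplh9 -- sh -c "ping -c 1 host.minikube.internal"

If name resolution succeeds but the ping times out, the failure is in host reachability from the pod network rather than in DNS.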

                                                
                                    

Test pass (290/320)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 5.26
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
9 TestDownloadOnly/v1.16.0/DeleteAll 0.21
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 4.92
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.21
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 4.97
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.21
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 1.29
30 TestBinaryMirror 0.73
31 TestOffline 87
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 134.06
38 TestAddons/parallel/Registry 13.65
40 TestAddons/parallel/InspektorGadget 11.71
41 TestAddons/parallel/MetricsServer 5.67
42 TestAddons/parallel/HelmTiller 12.47
44 TestAddons/parallel/CSI 79.01
45 TestAddons/parallel/Headlamp 12.2
46 TestAddons/parallel/CloudSpanner 5.53
47 TestAddons/parallel/LocalPath 52.8
48 TestAddons/parallel/NvidiaDevicePlugin 6.49
49 TestAddons/parallel/Yakd 6
52 TestAddons/serial/GCPAuth/Namespaces 0.12
53 TestAddons/StoppedEnableDisable 12.12
54 TestCertOptions 27.81
55 TestCertExpiration 229.16
57 TestForceSystemdFlag 27.48
58 TestForceSystemdEnv 37.63
60 TestKVMDriverInstallOrUpdate 1.38
64 TestErrorSpam/setup 20.62
65 TestErrorSpam/start 0.63
66 TestErrorSpam/status 0.89
67 TestErrorSpam/pause 1.56
68 TestErrorSpam/unpause 1.53
69 TestErrorSpam/stop 1.4
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 70.78
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 27.56
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 2.65
81 TestFunctional/serial/CacheCmd/cache/add_local 0.77
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 32.57
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.37
92 TestFunctional/serial/LogsFileCmd 1.4
93 TestFunctional/serial/InvalidService 4.21
95 TestFunctional/parallel/ConfigCmd 0.49
96 TestFunctional/parallel/DashboardCmd 8.83
97 TestFunctional/parallel/DryRun 0.45
98 TestFunctional/parallel/InternationalLanguage 0.2
99 TestFunctional/parallel/StatusCmd 1.31
103 TestFunctional/parallel/ServiceCmdConnect 6.96
104 TestFunctional/parallel/AddonsCmd 0.18
105 TestFunctional/parallel/PersistentVolumeClaim 25
107 TestFunctional/parallel/SSHCmd 0.58
108 TestFunctional/parallel/CpCmd 2.34
109 TestFunctional/parallel/MySQL 20.11
110 TestFunctional/parallel/FileSync 0.28
111 TestFunctional/parallel/CertSync 1.68
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
119 TestFunctional/parallel/License 0.24
120 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
121 TestFunctional/parallel/Version/short 0.08
122 TestFunctional/parallel/Version/components 1.51
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.51
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.5
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.39
127 TestFunctional/parallel/ImageCommands/ImageBuild 7
128 TestFunctional/parallel/ImageCommands/Setup 1.15
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.4
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
131 TestFunctional/parallel/MountCmd/any-port 7.13
132 TestFunctional/parallel/ProfileCmd/profile_list 0.41
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.96
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.62
136 TestFunctional/parallel/MountCmd/specific-port 2.43
137 TestFunctional/parallel/ServiceCmd/List 0.38
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.59
140 TestFunctional/parallel/ServiceCmd/Format 0.5
141 TestFunctional/parallel/MountCmd/VerifyCleanup 2.04
142 TestFunctional/parallel/ServiceCmd/URL 0.54
144 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.46
145 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.27
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.99
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.62
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.38
152 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
153 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
157 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
158 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
159 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
160 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
161 TestFunctional/delete_addon-resizer_images 0.07
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestIngressAddonLegacy/StartLegacyK8sCluster 65.21
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 8.21
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.55
174 TestJSONOutput/start/Command 66.39
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 0.66
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 0.62
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 5.76
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.24
199 TestKicCustomNetwork/create_custom_network 28.97
200 TestKicCustomNetwork/use_default_bridge_network 26.75
201 TestKicExistingNetwork 24.45
202 TestKicCustomSubnet 27.78
203 TestKicStaticIP 28.26
204 TestMainNoArgs 0.06
205 TestMinikubeProfile 46.86
208 TestMountStart/serial/StartWithMountFirst 8.25
209 TestMountStart/serial/VerifyMountFirst 0.27
210 TestMountStart/serial/StartWithMountSecond 5.28
211 TestMountStart/serial/VerifyMountSecond 0.26
212 TestMountStart/serial/DeleteFirst 1.63
213 TestMountStart/serial/VerifyMountPostDelete 0.26
214 TestMountStart/serial/Stop 1.19
215 TestMountStart/serial/RestartStopped 6.99
216 TestMountStart/serial/VerifyMountPostStop 0.26
219 TestMultiNode/serial/FreshStart2Nodes 57.52
220 TestMultiNode/serial/DeployApp2Nodes 3.3
222 TestMultiNode/serial/AddNode 19.03
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.28
225 TestMultiNode/serial/CopyFile 9.39
226 TestMultiNode/serial/StopNode 2.13
227 TestMultiNode/serial/StartAfterStop 10.88
228 TestMultiNode/serial/RestartKeepsNodes 109.65
229 TestMultiNode/serial/DeleteNode 4.7
230 TestMultiNode/serial/StopMultiNode 23.7
231 TestMultiNode/serial/RestartMultiNode 73.35
232 TestMultiNode/serial/ValidateNameConflict 26.73
237 TestPreload 140.82
239 TestScheduledStopUnix 97.85
242 TestInsufficientStorage 10.32
243 TestRunningBinaryUpgrade 61.9
245 TestKubernetesUpgrade 347.37
246 TestMissingContainerUpgrade 133.98
247 TestStoppedBinaryUpgrade/Setup 0.47
249 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
250 TestNoKubernetes/serial/StartWithK8s 35.99
251 TestStoppedBinaryUpgrade/Upgrade 89.17
252 TestNoKubernetes/serial/StartWithStopK8s 11.33
253 TestNoKubernetes/serial/Start 7.68
254 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
255 TestNoKubernetes/serial/ProfileList 6.48
256 TestNoKubernetes/serial/Stop 1.25
257 TestNoKubernetes/serial/StartNoArgs 6.15
258 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
267 TestPause/serial/Start 75.17
268 TestStoppedBinaryUpgrade/MinikubeLogs 0.9
276 TestNetworkPlugins/group/false 7.29
280 TestPause/serial/SecondStartNoReconfiguration 35.37
281 TestPause/serial/Pause 0.82
282 TestPause/serial/VerifyStatus 0.35
283 TestPause/serial/Unpause 0.71
284 TestPause/serial/PauseAgain 0.91
285 TestPause/serial/DeletePaused 3.23
287 TestStartStop/group/old-k8s-version/serial/FirstStart 116.12
288 TestPause/serial/VerifyDeletedResources 16.18
290 TestStartStop/group/embed-certs/serial/FirstStart 70.5
291 TestStartStop/group/embed-certs/serial/DeployApp 8.28
292 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.96
293 TestStartStop/group/embed-certs/serial/Stop 11.89
294 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
295 TestStartStop/group/embed-certs/serial/SecondStart 333.15
296 TestStartStop/group/old-k8s-version/serial/DeployApp 7.39
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.75
298 TestStartStop/group/old-k8s-version/serial/Stop 11.83
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
300 TestStartStop/group/old-k8s-version/serial/SecondStart 444.64
302 TestStartStop/group/no-preload/serial/FirstStart 48.68
303 TestStartStop/group/no-preload/serial/DeployApp 7.25
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.9
305 TestStartStop/group/no-preload/serial/Stop 11.85
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
307 TestStartStop/group/no-preload/serial/SecondStart 343.21
309 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 66.42
310 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.85
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
314 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 338.57
315 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.01
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
317 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
318 TestStartStop/group/embed-certs/serial/Pause 2.79
320 TestStartStop/group/newest-cni/serial/FirstStart 37.04
321 TestStartStop/group/newest-cni/serial/DeployApp 0
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
323 TestStartStop/group/newest-cni/serial/Stop 1.23
324 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
325 TestStartStop/group/newest-cni/serial/SecondStart 26.9
326 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
327 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
328 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
329 TestStartStop/group/newest-cni/serial/Pause 2.73
330 TestNetworkPlugins/group/auto/Start 68.89
331 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 8.01
332 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
333 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
334 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
335 TestStartStop/group/no-preload/serial/Pause 2.77
336 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
337 TestNetworkPlugins/group/kindnet/Start 70.79
338 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
339 TestStartStop/group/old-k8s-version/serial/Pause 3.05
340 TestNetworkPlugins/group/calico/Start 60.93
341 TestNetworkPlugins/group/auto/KubeletFlags 0.38
342 TestNetworkPlugins/group/auto/NetCatPod 11.16
343 TestNetworkPlugins/group/auto/DNS 0.13
344 TestNetworkPlugins/group/auto/Localhost 0.1
345 TestNetworkPlugins/group/auto/HairPin 0.1
346 TestNetworkPlugins/group/custom-flannel/Start 57.57
347 TestNetworkPlugins/group/calico/ControllerPod 6.01
348 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
349 TestNetworkPlugins/group/calico/KubeletFlags 0.33
350 TestNetworkPlugins/group/calico/NetCatPod 11.21
351 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
352 TestNetworkPlugins/group/kindnet/NetCatPod 10.24
353 TestNetworkPlugins/group/calico/DNS 0.13
354 TestNetworkPlugins/group/calico/Localhost 0.11
355 TestNetworkPlugins/group/calico/HairPin 0.11
356 TestNetworkPlugins/group/kindnet/DNS 0.13
357 TestNetworkPlugins/group/kindnet/Localhost 0.1
358 TestNetworkPlugins/group/kindnet/HairPin 0.11
359 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
360 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
361 TestNetworkPlugins/group/enable-default-cni/Start 86.76
362 TestNetworkPlugins/group/custom-flannel/DNS 0.27
363 TestNetworkPlugins/group/flannel/Start 66.45
364 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
365 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
366 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 17.01
367 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
368 TestNetworkPlugins/group/bridge/Start 38.34
369 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.38
370 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.48
371 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
372 TestNetworkPlugins/group/bridge/NetCatPod 9.17
373 TestNetworkPlugins/group/flannel/ControllerPod 6.01
374 TestNetworkPlugins/group/bridge/DNS 0.13
375 TestNetworkPlugins/group/bridge/Localhost 0.1
376 TestNetworkPlugins/group/bridge/HairPin 0.11
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
378 TestNetworkPlugins/group/flannel/NetCatPod 10.17
379 TestNetworkPlugins/group/flannel/DNS 0.15
380 TestNetworkPlugins/group/flannel/Localhost 0.11
381 TestNetworkPlugins/group/flannel/HairPin 0.12
382 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.19
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
x
+
TestDownloadOnly/v1.16.0/json-events (5.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-647512 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-647512 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.255907177s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (5.26s)
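Note: this step only downloads and caches artifacts (kic base image, preload tarball, kubectl); it never boots a node, which is why the LogsDuration output later in this section reports that no control plane node exists. A minimal sketch for reproducing the same download-only run outside this CI job (download-only-demo is a placeholder profile name; minikube v1.32.0 with the docker driver is assumed to be installed):

    minikube start -o=json --download-only -p download-only-demo \
      --force --alsologtostderr \
      --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker
    minikube delete -p download-only-demo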

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
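Note: preload-exists simply asserts that the previous step left the v1.16.0 cri-o preload tarball in the cache. A sketch for inspecting the same cache locally (paths assume the default MINIKUBE_HOME of ~/.minikube rather than the Jenkins workspace path used in this run):

    ls ~/.minikube/cache/preloaded-tarball/
    ls ~/.minikube/cache/linux/amd64/v1.16.0/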

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-647512
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-647512: exit status 85 (74.942351ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-647512 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |          |
	|         | -p download-only-647512        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 09:26:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 09:26:32.384757   11837 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:26:32.384900   11837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:26:32.384911   11837 out.go:309] Setting ErrFile to fd 2...
	I0115 09:26:32.384918   11837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:26:32.385121   11837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-3696/.minikube/bin
	W0115 09:26:32.385247   11837 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17953-3696/.minikube/config/config.json: open /home/jenkins/minikube-integration/17953-3696/.minikube/config/config.json: no such file or directory
	I0115 09:26:32.385805   11837 out.go:303] Setting JSON to true
	I0115 09:26:32.386605   11837 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":543,"bootTime":1705310250,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:26:32.386664   11837 start.go:138] virtualization: kvm guest
	I0115 09:26:32.389480   11837 out.go:97] [download-only-647512] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 09:26:32.391423   11837 out.go:169] MINIKUBE_LOCATION=17953
	W0115 09:26:32.389625   11837 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball: no such file or directory
	I0115 09:26:32.389667   11837 notify.go:220] Checking for updates...
	I0115 09:26:32.394907   11837 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:26:32.396828   11837 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:26:32.398420   11837 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube
	I0115 09:26:32.400218   11837 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0115 09:26:32.403293   11837 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 09:26:32.403602   11837 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:26:32.426303   11837 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 09:26:32.426403   11837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 09:26:32.776748   11837 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-01-15 09:26:32.768520609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 09:26:32.776858   11837 docker.go:295] overlay module found
	I0115 09:26:32.778918   11837 out.go:97] Using the docker driver based on user configuration
	I0115 09:26:32.778947   11837 start.go:298] selected driver: docker
	I0115 09:26:32.778953   11837 start.go:902] validating driver "docker" against <nil>
	I0115 09:26:32.779028   11837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 09:26:32.834389   11837 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-01-15 09:26:32.826044737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 09:26:32.834535   11837 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 09:26:32.834997   11837 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0115 09:26:32.835160   11837 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 09:26:32.837219   11837 out.go:169] Using Docker driver with root privileges
	I0115 09:26:32.838656   11837 cni.go:84] Creating CNI manager for ""
	I0115 09:26:32.838676   11837 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0115 09:26:32.838686   11837 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 09:26:32.838700   11837 start_flags.go:321] config:
	{Name:download-only-647512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-647512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:26:32.840248   11837 out.go:97] Starting control plane node download-only-647512 in cluster download-only-647512
	I0115 09:26:32.840266   11837 cache.go:121] Beginning downloading kic base image for docker with crio
	I0115 09:26:32.841906   11837 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0115 09:26:32.841936   11837 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0115 09:26:32.842057   11837 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 09:26:32.856875   11837 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 09:26:32.857072   11837 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0115 09:26:32.857194   11837 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 09:26:32.859690   11837 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0115 09:26:32.859707   11837 cache.go:56] Caching tarball of preloaded images
	I0115 09:26:32.859818   11837 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0115 09:26:32.861927   11837 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0115 09:26:32.861949   11837 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0115 09:26:32.889424   11837 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0115 09:26:36.092389   11837 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0115 09:26:36.255359   11837 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0115 09:26:36.255478   11837 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17953-3696/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0115 09:26:37.162656   11837 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0115 09:26:37.163021   11837 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/download-only-647512/config.json ...
	I0115 09:26:37.163060   11837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/download-only-647512/config.json: {Name:mk4810e601ae1d0cd9c9dce75da45a1320307a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 09:26:37.163263   11837 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0115 09:26:37.163447   11837 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17953-3696/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-647512"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
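Note: exit status 85 is the expected result here. A download-only profile has no control plane, so "minikube logs" has nothing to collect, and the test treats the quick failure as a pass. Observing the same behavior by hand (sketch; download-only-demo is the placeholder profile from the earlier download-only example):

    minikube logs -p download-only-demo
    echo $?   # 85 in a run like this one, since the profile was never started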

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-647512
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (4.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-598232 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-598232 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.923579611s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.92s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-598232
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-598232: exit status 85 (71.108275ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-647512 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | -p download-only-647512        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| delete  | -p download-only-647512        | download-only-647512 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| start   | -o=json --download-only        | download-only-598232 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | -p download-only-598232        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 09:26:38
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 09:26:38.063128   12112 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:26:38.063407   12112 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:26:38.063417   12112 out.go:309] Setting ErrFile to fd 2...
	I0115 09:26:38.063422   12112 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:26:38.063683   12112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-3696/.minikube/bin
	I0115 09:26:38.064270   12112 out.go:303] Setting JSON to true
	I0115 09:26:38.065115   12112 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":548,"bootTime":1705310250,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:26:38.065181   12112 start.go:138] virtualization: kvm guest
	I0115 09:26:38.067594   12112 out.go:97] [download-only-598232] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 09:26:38.069454   12112 out.go:169] MINIKUBE_LOCATION=17953
	I0115 09:26:38.067746   12112 notify.go:220] Checking for updates...
	I0115 09:26:38.072761   12112 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:26:38.074437   12112 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:26:38.075959   12112 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube
	I0115 09:26:38.077396   12112 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0115 09:26:38.080344   12112 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 09:26:38.080583   12112 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:26:38.100456   12112 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 09:26:38.100534   12112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 09:26:38.157684   12112 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:41 SystemTime:2024-01-15 09:26:38.149538263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 09:26:38.157781   12112 docker.go:295] overlay module found
	I0115 09:26:38.159657   12112 out.go:97] Using the docker driver based on user configuration
	I0115 09:26:38.159679   12112 start.go:298] selected driver: docker
	I0115 09:26:38.159684   12112 start.go:902] validating driver "docker" against <nil>
	I0115 09:26:38.159758   12112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 09:26:38.209802   12112 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:41 SystemTime:2024-01-15 09:26:38.201799014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 09:26:38.209944   12112 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 09:26:38.210399   12112 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0115 09:26:38.210524   12112 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 09:26:38.212742   12112 out.go:169] Using Docker driver with root privileges
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-598232"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-598232
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (4.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-567794 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-567794 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.964990406s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (4.97s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-567794
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-567794: exit status 85 (77.21433ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-647512 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | -p download-only-647512           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| delete  | -p download-only-647512           | download-only-647512 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| start   | -o=json --download-only           | download-only-598232 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | -p download-only-598232           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| delete  | -p download-only-598232           | download-only-598232 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC | 15 Jan 24 09:26 UTC |
	| start   | -o=json --download-only           | download-only-567794 | jenkins | v1.32.0 | 15 Jan 24 09:26 UTC |                     |
	|         | -p download-only-567794           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 09:26:43
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 09:26:43.408829   12390 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:26:43.408943   12390 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:26:43.408951   12390 out.go:309] Setting ErrFile to fd 2...
	I0115 09:26:43.408956   12390 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:26:43.409143   12390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-3696/.minikube/bin
	I0115 09:26:43.409708   12390 out.go:303] Setting JSON to true
	I0115 09:26:43.410492   12390 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":554,"bootTime":1705310250,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:26:43.410553   12390 start.go:138] virtualization: kvm guest
	I0115 09:26:43.413464   12390 out.go:97] [download-only-567794] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 09:26:43.415053   12390 out.go:169] MINIKUBE_LOCATION=17953
	I0115 09:26:43.413633   12390 notify.go:220] Checking for updates...
	I0115 09:26:43.418274   12390 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:26:43.419771   12390 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:26:43.421202   12390 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube
	I0115 09:26:43.422626   12390 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0115 09:26:43.425433   12390 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 09:26:43.425681   12390 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:26:43.450113   12390 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 09:26:43.450231   12390 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 09:26:43.499832   12390 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-15 09:26:43.491586299 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 09:26:43.499951   12390 docker.go:295] overlay module found
	I0115 09:26:43.501828   12390 out.go:97] Using the docker driver based on user configuration
	I0115 09:26:43.501854   12390 start.go:298] selected driver: docker
	I0115 09:26:43.501861   12390 start.go:902] validating driver "docker" against <nil>
	I0115 09:26:43.501945   12390 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 09:26:43.550559   12390 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-15 09:26:43.542711554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 09:26:43.550746   12390 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 09:26:43.551681   12390 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0115 09:26:43.552033   12390 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 09:26:43.554361   12390 out.go:169] Using Docker driver with root privileges
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-567794"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-567794
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.29s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-834376 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-834376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-834376
--- PASS: TestDownloadOnlyKic (1.29s)

                                                
                                    
x
+
TestBinaryMirror (0.73s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-488440 --alsologtostderr --binary-mirror http://127.0.0.1:43641 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-488440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-488440
--- PASS: TestBinaryMirror (0.73s)

                                                
                                    
x
+
TestOffline (87s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-604825 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-604825 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m24.683788619s)
helpers_test.go:175: Cleaning up "offline-crio-604825" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-604825
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-604825: (2.312961848s)
--- PASS: TestOffline (87.00s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-154292
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-154292: exit status 85 (68.384205ms)

                                                
                                                
-- stdout --
	* Profile "addons-154292" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-154292"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-154292
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-154292: exit status 85 (68.523367ms)

                                                
                                                
-- stdout --
	* Profile "addons-154292" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-154292"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (134.06s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-154292 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-154292 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m14.064232832s)
--- PASS: TestAddons/Setup (134.06s)

                                                
                                    
x
+
TestAddons/parallel/Registry (13.65s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 16.416867ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5v2wn" [2d0e5c92-7366-42c3-8b78-10a21aa56b21] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004466602s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-49v8j" [4ed4e42b-4d38-4db1-a11f-5dc29a2b27de] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004263987s
addons_test.go:340: (dbg) Run:  kubectl --context addons-154292 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-154292 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-154292 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.83785635s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-154292 ip
2024/01/15 09:29:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-154292 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.65s)
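For reference, the registry check above boils down to launching a throwaway busybox pod and probing the addon's in-cluster service over cluster DNS. A minimal Go sketch (not part of the test suite) that shells out the same way the harness does; kubectl on PATH is assumed, the context name is the one from this run, and "registry-probe" is just an illustrative pod name:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// One-shot busybox pod that probes the registry addon's in-cluster service,
	// mirroring the `wget --spider` step in the log above.
	cmd := exec.Command("kubectl", "--context", "addons-154292", "run", "registry-probe",
		"--rm", "-i", "--restart=Never", "--image=gcr.io/k8s-minikube/busybox",
		"--", "sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		log.Fatalf("registry not reachable from inside the cluster: %v", err)
	}
}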

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.71s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bpmb7" [a80a992c-080a-4bc8-8c54-12605f5f2620] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004307768s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-154292
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-154292: (5.704666296s)
--- PASS: TestAddons/parallel/InspektorGadget (11.71s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.67s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.504957ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-7q98p" [d5d5ebfd-2bbf-4607-b6c5-2d877e1f6c24] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004781951s
addons_test.go:415: (dbg) Run:  kubectl --context addons-154292 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-154292 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.67s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (12.47s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 2.943046ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-r46wb" [f28e7e4a-28ae-40b8-8387-fa7698c378cd] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005673417s
addons_test.go:473: (dbg) Run:  kubectl --context addons-154292 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-154292 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.898772182s)
addons_test.go:478: kubectl --context addons-154292 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-154292 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-154292 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.625529467s)
addons_test.go:478: kubectl --context addons-154292 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-154292 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.47s)
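The "Unable to use a TTY" and "falling back to streaming logs" lines in stderr are expected here: the test passes -it to kubectl run, but the CI runner has no terminal to allocate. A hedged sketch of the same tiller probe without requesting a TTY (context and image tag copied from this run):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same `helm version` probe as above, but with -i only (no -t), so kubectl
	// never tries to allocate a pseudo-terminal and the warning is not printed.
	out, err := exec.Command("kubectl", "--context", "addons-154292", "run", "helm-test",
		"--rm", "-i", "--restart=Never", "--image=docker.io/alpine/helm:2.16.3",
		"--namespace=kube-system", "--", "version").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		log.Fatalf("helm version check failed: %v", err)
	}
}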

                                                
                                    
x
+
TestAddons/parallel/CSI (79.01s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 4.976157ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-154292 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154292 get pvc hpvc -o jsonpath={.status.phase} -n default
    (the same phase check was repeated 38 times in total while waiting for pvc "hpvc" to bind)
addons_test.go:574: (dbg) Run:  kubectl --context addons-154292 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e62e0d08-dae8-484e-b4c1-bef21ee49e0d] Pending
helpers_test.go:344: "task-pv-pod" [e62e0d08-dae8-484e-b4c1-bef21ee49e0d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e62e0d08-dae8-484e-b4c1-bef21ee49e0d] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003571039s
addons_test.go:584: (dbg) Run:  kubectl --context addons-154292 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-154292 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-154292 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-154292 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-154292 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-154292 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154292 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
    (the same phase check was repeated 14 times in total while waiting for pvc "hpvc-restore" to bind)
addons_test.go:616: (dbg) Run:  kubectl --context addons-154292 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [daf56438-2d4f-4f8b-bd51-2893d38a38b2] Pending
helpers_test.go:344: "task-pv-pod-restore" [daf56438-2d4f-4f8b-bd51-2893d38a38b2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [daf56438-2d4f-4f8b-bd51-2893d38a38b2] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003777402s
addons_test.go:626: (dbg) Run:  kubectl --context addons-154292 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-154292 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-154292 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-154292 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-154292 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.574122228s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-154292 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (79.01s)
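The long runs of helpers_test.go:394 lines above are simply a phase poll: the helper re-reads the claim's .status.phase until it reports the wanted phase (Bound in this scenario) or the window expires. A minimal Go sketch of that loop, not the actual helper; kubectl on PATH is assumed and the names are the ones used in this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls a PVC's .status.phase via kubectl until it reports the
// wanted phase or the deadline expires, mirroring the jsonpath loop in the log.
func waitForPVCPhase(kubectlContext, ns, pvc, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectlContext,
			"get", "pvc", pvc, "-n", ns, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %s", ns, pvc, want, timeout)
}

func main() {
	if err := waitForPVCPhase("addons-154292", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("hpvc is Bound")
}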

                                                
                                    
x
+
TestAddons/parallel/Headlamp (12.2s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-154292 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-154292 --alsologtostderr -v=1: (1.142613516s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-z47n9" [b34b3a32-4123-4469-a882-6e5d3d1426b5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-z47n9" [b34b3a32-4123-4469-a882-6e5d3d1426b5] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-z47n9" [b34b3a32-4123-4469-a882-6e5d3d1426b5] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.058311558s
--- PASS: TestAddons/parallel/Headlamp (12.20s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-z5czd" [6018f971-ea8a-4f53-a151-89cadb12f17e] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003796301s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-154292
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (52.8s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-154292 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-154292 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154292 get pvc test-pvc -o jsonpath={.status.phase} -n default
    (the same phase check was repeated 7 times in total while waiting on pvc "test-pvc")
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b361e679-a496-49ed-8a17-40ef03f7973b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b361e679-a496-49ed-8a17-40ef03f7973b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b361e679-a496-49ed-8a17-40ef03f7973b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00328647s
addons_test.go:891: (dbg) Run:  kubectl --context addons-154292 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-154292 ssh "cat /opt/local-path-provisioner/pvc-f7279b53-de25-4edb-8917-9e502cb81cfd_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-154292 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-154292 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-154292 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-154292 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.939548294s)
--- PASS: TestAddons/parallel/LocalPath (52.80s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.49s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jbhds" [3a979418-cfab-4d46-9160-e4a887d9aea9] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005361574s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-154292
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.49s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-nsmdk" [55aba285-5db4-495d-a0aa-2c368471e08b] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003311066s
--- PASS: TestAddons/parallel/Yakd (6.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-154292 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-154292 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.12s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-154292
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-154292: (11.83300347s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-154292
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-154292
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-154292
--- PASS: TestAddons/StoppedEnableDisable (12.12s)

                                                
                                    
x
+
TestCertOptions (27.81s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-127185 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-127185 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.571413057s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-127185 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-127185 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-127185 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-127185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-127185
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-127185: (3.571596573s)
--- PASS: TestCertOptions (27.81s)
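The openssl step above is what actually validates --apiserver-ips and --apiserver-names: the apiserver certificate is dumped from inside the node and its SANs are inspected. A rough Go equivalent (a sketch, not the test's own check), using the binary path, profile name, and expected SAN values from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Dump the apiserver certificate from inside the node and confirm the SANs
	// requested via --apiserver-ips / --apiserver-names made it in.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-127185",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		log.Fatalf("failed to read apiserver cert: %v", err)
	}
	cert := string(out)
	for _, want := range []string{"192.168.15.15", "www.google.com", "localhost"} {
		if !strings.Contains(cert, want) {
			log.Fatalf("expected SAN %q not found in apiserver certificate", want)
		}
	}
	fmt.Println("all requested SANs present")
}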

                                                
                                    
x
+
TestCertExpiration (229.16s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-263869 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0115 09:57:29.333205   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-263869 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.369366915s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-263869 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-263869 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (20.81245275s)
helpers_test.go:175: Cleaning up "cert-expiration-263869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-263869
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-263869: (1.979684769s)
--- PASS: TestCertExpiration (229.16s)
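A quick way to confirm a --cert-expiration value took effect is to read the certificate's notAfter field from the node. A small sketch under the same assumptions as this run (workspace-relative binary path, profile name, and the cert path shown in the TestCertOptions log above):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Print the apiserver certificate's expiry date from inside the node.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-expiration-263869",
		"ssh", "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		log.Fatalf("could not read apiserver cert: %v", err)
	}
	fmt.Printf("apiserver cert %s", out) // prints a line like "notAfter=..."
}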

                                                
                                    
x
+
TestForceSystemdFlag (27.48s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-281057 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-281057 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.714107264s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-281057 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-281057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-281057
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-281057: (2.463013608s)
--- PASS: TestForceSystemdFlag (27.48s)
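The ssh step above dumps CRI-O's generated drop-in config; the point of --force-systemd is that the runtime switches to the systemd cgroup manager. A hedged Go sketch of that check (the exact cgroup_manager key name is an assumption about the generated file; binary path and profile name are from this run):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Read CRI-O's drop-in config from the node, as the test does, and look for
	// the systemd cgroup manager setting.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-281057",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		log.Fatalf("could not read CRI-O config: %v", err)
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager setting not found; inspect the config output")
	}
}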

                                                
                                    
x
+
TestForceSystemdEnv (37.63s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-672880 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0115 09:56:48.908079   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/functional-945307/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-672880 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.730468932s)
helpers_test.go:175: Cleaning up "force-systemd-env-672880" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-672880
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-672880: (8.896463666s)
--- PASS: TestForceSystemdEnv (37.63s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.38s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.38s)

                                                
                                    
x
+
TestErrorSpam/setup (20.62s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-414234 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-414234 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-414234 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-414234 --driver=docker  --container-runtime=crio: (20.61733991s)
--- PASS: TestErrorSpam/setup (20.62s)

                                                
                                    
x
+
TestErrorSpam/start (0.63s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-414234 --log_dir /tmp/nospam-414234 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-414234 --log_dir /tmp/nospam-414234 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-414234 --log_dir /tmp/nospam-414234 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

                                                
                                    
x
+
TestErrorSpam/status (0.89s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-414234 --log_dir /tmp/nospam-414234 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-414234 --log_dir /tmp/nospam-414234 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-414234 --log_dir /tmp/nospam-414234 status
--- PASS: TestErrorSpam/status (0.89s)

                                                
                                    
x
+
TestErrorSpam/pause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-414234 --log_dir /tmp/nospam-414234 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-414234 --log_dir /tmp/nospam-414234 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-414234 --log_dir /tmp/nospam-414234 pause
--- PASS: TestErrorSpam/pause (1.56s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-414234 --log_dir /tmp/nospam-414234 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-414234 --log_dir /tmp/nospam-414234 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-414234 --log_dir /tmp/nospam-414234 unpause
--- PASS: TestErrorSpam/unpause (1.53s)

                                                
                                    
x
+
TestErrorSpam/stop (1.4s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-414234 --log_dir /tmp/nospam-414234 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-414234 --log_dir /tmp/nospam-414234 stop: (1.184044872s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-414234 --log_dir /tmp/nospam-414234 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-414234 --log_dir /tmp/nospam-414234 stop
--- PASS: TestErrorSpam/stop (1.40s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17953-3696/.minikube/files/etc/test/nested/copy/11825/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (70.78s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945307 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0115 09:34:05.325306   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
E0115 09:34:05.331235   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
E0115 09:34:05.341538   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
E0115 09:34:05.361822   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
E0115 09:34:05.402103   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
E0115 09:34:05.482412   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
E0115 09:34:05.642867   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
E0115 09:34:05.963412   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
E0115 09:34:06.604295   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
E0115 09:34:07.884715   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
E0115 09:34:10.445261   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-945307 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m10.778189648s)
--- PASS: TestFunctional/serial/StartWithProxy (70.78s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (27.56s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945307 --alsologtostderr -v=8
E0115 09:34:15.566315   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
E0115 09:34:25.806494   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-945307 --alsologtostderr -v=8: (27.553324215s)
functional_test.go:659: soft start took 27.554786986s for "functional-945307" cluster.
--- PASS: TestFunctional/serial/SoftStart (27.56s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-945307 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.65s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (0.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-945307 /tmp/TestFunctionalserialCacheCmdcacheadd_local939493023/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 cache add minikube-local-cache-test:functional-945307
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 cache delete minikube-local-cache-test:functional-945307
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-945307
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.77s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945307 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (296.002519ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
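The cache_reload sequence above is: remove the image from the node's runtime, confirm crictl inspecti now fails, run minikube cache reload, and confirm the image is back. A compact Go sketch of the same cycle (not the test itself), reusing the binary path, profile name, and image tag from this run:

package main

import (
	"log"
	"os/exec"
)

// run executes a command, forwarding its output, and reports a non-zero exit as an error.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
	return cmd.Run()
}

func main() {
	const mk = "out/minikube-linux-amd64"
	const profile = "functional-945307"

	// 1. Remove the cached image from the node's container runtime.
	if err := run(mk, "-p", profile, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest"); err != nil {
		log.Fatal(err)
	}
	// 2. Confirm it is gone: inspecting it should now fail.
	if err := run(mk, "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		log.Fatal("image unexpectedly still present after rmi")
	}
	// 3. Push everything in minikube's local cache back into the node.
	if err := run(mk, "-p", profile, "cache", "reload"); err != nil {
		log.Fatal(err)
	}
	// 4. The image should be back.
	if err := run(mk, "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		log.Fatal("image still missing after cache reload")
	}
	log.Println("cache reload restored registry.k8s.io/pause:latest")
}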

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 kubectl -- --context functional-945307 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-945307 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.57s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945307 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0115 09:34:46.287626   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-945307 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.565304155s)
functional_test.go:757: restart took 32.565448056s for "functional-945307" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.57s)
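To see that an --extra-config value really reached the control plane, one can inspect the kube-apiserver static pod's command line. A sketch assuming the standard kubeadm component=kube-apiserver label, kubectl on PATH, and the functional-945307 context from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Pull the kube-apiserver container's command array and look for the
	// admission plugin passed via --extra-config in the restart above.
	out, err := exec.Command("kubectl", "--context", "functional-945307",
		"get", "pods", "-n", "kube-system", "-l", "component=kube-apiserver",
		"-o", "jsonpath={.items[0].spec.containers[0].command}").Output()
	if err != nil {
		log.Fatalf("could not inspect kube-apiserver pod: %v", err)
	}
	if strings.Contains(string(out), "NamespaceAutoProvision") {
		fmt.Println("extra-config reached the apiserver (enable-admission-plugins includes NamespaceAutoProvision)")
	} else {
		fmt.Println("admission plugin not found; apiserver command was:", string(out))
	}
}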

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-945307 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
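The health check above amounts to reading phase and readiness off the control-plane pods. A minimal equivalent query (the jsonpath expression is illustrative, not the test's own code):
	kubectl --context functional-945307 -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'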

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-945307 logs: (1.372459646s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 logs --file /tmp/TestFunctionalserialLogsFileCmd1071833670/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-945307 logs --file /tmp/TestFunctionalserialLogsFileCmd1071833670/001/logs.txt: (1.400956737s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.40s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.21s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-945307 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-945307
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-945307: exit status 115 (343.750858ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30624 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-945307 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.21s)
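Exit status 115 (SVC_UNREACHABLE) is expected here because the service has no running pods behind it. A quick way to see the same condition by hand; the endpoints check is a diagnostic addition, not part of the test:
	kubectl --context functional-945307 apply -f testdata/invalidsvc.yaml
	kubectl --context functional-945307 get endpoints invalid-svc                        # no ready addresses
	out/minikube-linux-amd64 service invalid-svc -p functional-945307; echo "exit: $?"   # 115, SVC_UNREACHABLE
	kubectl --context functional-945307 delete -f testdata/invalidsvc.yaml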

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945307 config get cpus: exit status 14 (87.386986ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945307 config get cpus: exit status 14 (71.922685ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
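config get exits with code 14 whenever the key is unset, which is what the two non-zero exits above verify. The same round trip, condensed:
	out/minikube-linux-amd64 -p functional-945307 config unset cpus
	out/minikube-linux-amd64 -p functional-945307 config get cpus; echo "exit: $?"   # 14: key not found in config
	out/minikube-linux-amd64 -p functional-945307 config set cpus 2
	out/minikube-linux-amd64 -p functional-945307 config get cpus; echo "exit: $?"   # 0: prints 2
	out/minikube-linux-amd64 -p functional-945307 config unset cpus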

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (8.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-945307 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-945307 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 47620: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.83s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945307 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-945307 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (189.212023ms)

                                                
                                                
-- stdout --
	* [functional-945307] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 09:35:39.832714   46864 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:35:39.832948   46864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:35:39.832987   46864 out.go:309] Setting ErrFile to fd 2...
	I0115 09:35:39.833000   46864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:35:39.833325   46864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-3696/.minikube/bin
	I0115 09:35:39.833934   46864 out.go:303] Setting JSON to false
	I0115 09:35:39.834984   46864 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1090,"bootTime":1705310250,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:35:39.835081   46864 start.go:138] virtualization: kvm guest
	I0115 09:35:39.838026   46864 out.go:177] * [functional-945307] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 09:35:39.842751   46864 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 09:35:39.840772   46864 notify.go:220] Checking for updates...
	I0115 09:35:39.846583   46864 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:35:39.848379   46864 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:35:39.850419   46864 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube
	I0115 09:35:39.852385   46864 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 09:35:39.854248   46864 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 09:35:39.856534   46864 config.go:182] Loaded profile config "functional-945307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:35:39.857331   46864 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:35:39.883654   46864 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 09:35:39.883832   46864 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 09:35:39.940355   46864 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:48 SystemTime:2024-01-15 09:35:39.930989693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 09:35:39.940469   46864 docker.go:295] overlay module found
	I0115 09:35:39.942865   46864 out.go:177] * Using the docker driver based on existing profile
	I0115 09:35:39.944389   46864 start.go:298] selected driver: docker
	I0115 09:35:39.944405   46864 start.go:902] validating driver "docker" against &{Name:functional-945307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-945307 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:35:39.944496   46864 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 09:35:39.946713   46864 out.go:177] 
	W0115 09:35:39.948182   46864 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0115 09:35:39.949660   46864 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945307 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
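Even with --dry-run, start validates the requested resources first, so the 250MB request fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) before anything is created, while the second run without a memory override passes validation. A minimal reproduction:
	out/minikube-linux-amd64 start -p functional-945307 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=crio; echo "exit: $?"   # 23: below the 1800MB usable minimum
	out/minikube-linux-amd64 start -p functional-945307 --dry-run --alsologtostderr -v=1 \
	  --driver=docker --container-runtime=crio                    # validates only, creates nothing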

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945307 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-945307 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (201.233681ms)

                                                
                                                
-- stdout --
	* [functional-945307] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 09:35:40.284999   47119 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:35:40.285140   47119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:35:40.285152   47119 out.go:309] Setting ErrFile to fd 2...
	I0115 09:35:40.285157   47119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:35:40.285434   47119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-3696/.minikube/bin
	I0115 09:35:40.285971   47119 out.go:303] Setting JSON to false
	I0115 09:35:40.286899   47119 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1090,"bootTime":1705310250,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:35:40.286975   47119 start.go:138] virtualization: kvm guest
	I0115 09:35:40.295214   47119 out.go:177] * [functional-945307] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0115 09:35:40.297136   47119 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 09:35:40.298962   47119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:35:40.297029   47119 notify.go:220] Checking for updates...
	I0115 09:35:40.302181   47119 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:35:40.303596   47119 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube
	I0115 09:35:40.304914   47119 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 09:35:40.306343   47119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 09:35:40.308330   47119 config.go:182] Loaded profile config "functional-945307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:35:40.309024   47119 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:35:40.341196   47119 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 09:35:40.341323   47119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 09:35:40.399913   47119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:48 SystemTime:2024-01-15 09:35:40.390728542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 09:35:40.400032   47119 docker.go:295] overlay module found
	I0115 09:35:40.403205   47119 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0115 09:35:40.404629   47119 start.go:298] selected driver: docker
	I0115 09:35:40.404646   47119 start.go:902] validating driver "docker" against &{Name:functional-945307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-945307 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 09:35:40.404738   47119 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 09:35:40.407246   47119 out.go:177] 
	W0115 09:35:40.408938   47119 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0115 09:35:40.410612   47119 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
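The French output above is the same failing dry-run with a French locale selected; minikube picks the message language from the standard locale environment variables, so a run along these lines should reproduce it (the exact LC_ALL value is an assumption, not taken from the test):
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-945307 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=crio   # same RSRC_INSUFFICIENT_REQ_MEMORY failure, localized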

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (6.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-945307 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-945307 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-vmvc8" [be02c8c6-8a7e-457e-8079-ada9f57d4308] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-vmvc8" [be02c8c6-8a7e-457e-8079-ada9f57d4308] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.005931564s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30702
functional_test.go:1674: http://192.168.49.2:30702: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-vmvc8

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30702
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.96s)
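The connect test is deploy, expose as NodePort, resolve the URL through minikube, then hit it; the final curl is an illustrative addition, not part of the test:
	kubectl --context functional-945307 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-945307 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-945307 service hello-node-connect --url)
	curl -s "$URL"   # echoserver reports the pod hostname and request headers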

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2d8b50c3-0233-46fb-a642-e4bd3c7296f9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00491611s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-945307 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-945307 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-945307 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-945307 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [968c8c15-c304-41b5-bb79-26fbc6ba0d77] Pending
helpers_test.go:344: "sp-pod" [968c8c15-c304-41b5-bb79-26fbc6ba0d77] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [968c8c15-c304-41b5-bb79-26fbc6ba0d77] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004645565s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-945307 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-945307 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-945307 delete -f testdata/storage-provisioner/pod.yaml: (1.044441968s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-945307 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e0412869-78db-4c8e-b60a-5ec8195c55e0] Pending
helpers_test.go:344: "sp-pod" [e0412869-78db-4c8e-b60a-5ec8195c55e0] Running
2024/01/15 09:35:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004369365s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-945307 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.00s)
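The point of the second pod is persistence: a file written through the PVC must still exist after the first pod is deleted and a new one mounts the same claim. The same sequence, condensed:
	kubectl --context functional-945307 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-945307 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-945307 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-945307 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-945307 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-945307 exec sp-pod -- ls /tmp/mount   # foo survives the pod recreation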

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh -n functional-945307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 cp functional-945307:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2198262961/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh -n functional-945307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
E0115 09:35:27.248044   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh -n functional-945307 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.34s)
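cp works in both directions and copes with a destination directory that does not yet exist inside the node. The three transfers above, condensed; the host-side destination path here is arbitrary, not the test's temp directory:
	out/minikube-linux-amd64 -p functional-945307 cp testdata/cp-test.txt /home/docker/cp-test.txt                # host -> node
	out/minikube-linux-amd64 -p functional-945307 cp functional-945307:/home/docker/cp-test.txt /tmp/cp-test.txt  # node -> host
	out/minikube-linux-amd64 -p functional-945307 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt         # destination dirs created in the node
	out/minikube-linux-amd64 -p functional-945307 ssh -n functional-945307 "sudo cat /home/docker/cp-test.txt"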

                                                
                                    
x
+
TestFunctional/parallel/MySQL (20.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-945307 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-fd2h7" [2c3be6b6-0973-425c-a226-904058222ba0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-fd2h7" [2c3be6b6-0973-425c-a226-904058222ba0] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.004316721s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-945307 exec mysql-859648c796-fd2h7 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-945307 exec mysql-859648c796-fd2h7 -- mysql -ppassword -e "show databases;": exit status 1 (104.264897ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-945307 exec mysql-859648c796-fd2h7 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-945307 exec mysql-859648c796-fd2h7 -- mysql -ppassword -e "show databases;": exit status 1 (103.193913ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-945307 exec mysql-859648c796-fd2h7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.11s)
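ERROR 2002 just means mysqld inside the pod has not finished initializing, so the test retries the query until it answers. An equivalent retry loop, using the pod name from this run:
	until kubectl --context functional-945307 exec mysql-859648c796-fd2h7 -- \
	    mysql -ppassword -e "show databases;"; do
	  sleep 2   # socket not up yet (ERROR 2002); try again
	done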

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/11825/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "sudo cat /etc/test/nested/copy/11825/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/11825.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "sudo cat /etc/ssl/certs/11825.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/11825.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "sudo cat /usr/share/ca-certificates/11825.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/118252.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "sudo cat /etc/ssl/certs/118252.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/118252.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "sudo cat /usr/share/ca-certificates/118252.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.68s)
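The hashed names (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash symlinks to the synced PEM files, so the same certificate is being checked in three places. The hash can be derived from the certificate itself; this command is an illustrative addition and assumes openssl is available in the node:
	out/minikube-linux-amd64 -p functional-945307 ssh "openssl x509 -hash -noout -in /usr/share/ca-certificates/11825.pem"   # prints 51391683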

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-945307 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945307 ssh "sudo systemctl is-active docker": exit status 1 (317.290938ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945307 ssh "sudo systemctl is-active containerd": exit status 1 (273.679907ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
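systemctl is-active exits with status 3 for an inactive unit, so both probes above are non-zero even though the printed state "inactive" is exactly what the test wants. With crio as the selected runtime, only it should report active; the crio line is an illustrative addition:
	out/minikube-linux-amd64 -p functional-945307 ssh "sudo systemctl is-active docker"       # inactive, exit 3
	out/minikube-linux-amd64 -p functional-945307 ssh "sudo systemctl is-active containerd"   # inactive, exit 3
	out/minikube-linux-amd64 -p functional-945307 ssh "sudo systemctl is-active crio"         # active, exit 0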

                                                
                                    
x
+
TestFunctional/parallel/License (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-945307 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-945307 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-vpms2" [f65bf772-0c6e-4138-99de-10b729383f18] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-vpms2" [f65bf772-0c6e-4138-99de-10b729383f18] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004340405s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-amd64 -p functional-945307 version -o=json --components: (1.510979301s)
--- PASS: TestFunctional/parallel/Version/components (1.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-945307 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-945307
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-945307 image ls --format short --alsologtostderr:
I0115 09:35:52.206213   49272 out.go:296] Setting OutFile to fd 1 ...
I0115 09:35:52.206506   49272 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:35:52.206516   49272 out.go:309] Setting ErrFile to fd 2...
I0115 09:35:52.206523   49272 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:35:52.206743   49272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-3696/.minikube/bin
I0115 09:35:52.207383   49272 config.go:182] Loaded profile config "functional-945307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:35:52.207513   49272 config.go:182] Loaded profile config "functional-945307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:35:52.207960   49272 cli_runner.go:164] Run: docker container inspect functional-945307 --format={{.State.Status}}
I0115 09:35:52.224792   49272 ssh_runner.go:195] Run: systemctl --version
I0115 09:35:52.224844   49272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-945307
I0115 09:35:52.254960   49272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/functional-945307/id_rsa Username:docker}
I0115 09:35:52.530288   49272 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.51s)
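image ls reads the crio image store over SSH (the sudo crictl images --output json call in the stderr trace) and renders it in the requested format; the short, table and json variants exercised by this test family are:
	out/minikube-linux-amd64 -p functional-945307 image ls --format short
	out/minikube-linux-amd64 -p functional-945307 image ls --format table
	out/minikube-linux-amd64 -p functional-945307 image ls --format json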

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-945307 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | a8758716bb6aa | 191MB  |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/nginx                 | alpine             | 529b5644c430c | 44.4MB |
| gcr.io/google-containers/addon-resizer  | functional-945307  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-945307 image ls --format table --alsologtostderr:
I0115 09:35:54.705236   49664 out.go:296] Setting OutFile to fd 1 ...
I0115 09:35:54.705419   49664 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:35:54.705439   49664 out.go:309] Setting ErrFile to fd 2...
I0115 09:35:54.705451   49664 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:35:54.705667   49664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-3696/.minikube/bin
I0115 09:35:54.706329   49664 config.go:182] Loaded profile config "functional-945307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:35:54.706478   49664 config.go:182] Loaded profile config "functional-945307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:35:54.706908   49664 cli_runner.go:164] Run: docker container inspect functional-945307 --format={{.State.Status}}
I0115 09:35:54.724421   49664 ssh_runner.go:195] Run: systemctl --version
I0115 09:35:54.724480   49664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-945307
I0115 09:35:54.745245   49664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/functional-945307/id_rsa Username:docker}
I0115 09:35:54.845447   49664 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-945307 image ls --format json --alsologtostderr:
[{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d
4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
"gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/paus
e@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff","repoDigests":["docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b
417eb95212686","docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44405005"},{"id":"a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6","repoDigests":["docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c","docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac"],"repoTags":["docker.io/library/nginx:latest"],"size":"190867606"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-945307"],"size":"34114467"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","regist
ry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dc
ddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-945307 image ls --format json --alsologtostderr:
I0115 09:35:54.220767   49559 out.go:296] Setting OutFile to fd 1 ...
I0115 09:35:54.220878   49559 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:35:54.220887   49559 out.go:309] Setting ErrFile to fd 2...
I0115 09:35:54.220892   49559 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:35:54.221120   49559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-3696/.minikube/bin
I0115 09:35:54.221715   49559 config.go:182] Loaded profile config "functional-945307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:35:54.221809   49559 config.go:182] Loaded profile config "functional-945307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:35:54.222205   49559 cli_runner.go:164] Run: docker container inspect functional-945307 --format={{.State.Status}}
I0115 09:35:54.242371   49559 ssh_runner.go:195] Run: systemctl --version
I0115 09:35:54.242427   49559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-945307
I0115 09:35:54.270799   49559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/functional-945307/id_rsa Username:docker}
I0115 09:35:54.529778   49559 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.50s)

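The JSON listing above is a flat array of objects with id, repoDigests, repoTags, and size fields. For readers who want to consume it programmatically, a minimal Go sketch follows; the struct is inferred from the output shown here rather than taken from minikube's own types, and the binary path and profile name are assumed to match this run.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the "image ls --format json" output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, reported as a string
}

func main() {
	// Same command the test ran, minus --alsologtostderr.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-945307",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID, img.RepoTags, img.Size)
	}
}
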
TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-945307 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests:
- docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests:
- docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "44405005"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-945307
size: "34114467"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-945307 image ls --format yaml --alsologtostderr:
I0115 09:35:52.721910   49316 out.go:296] Setting OutFile to fd 1 ...
I0115 09:35:52.722063   49316 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:35:52.722073   49316 out.go:309] Setting ErrFile to fd 2...
I0115 09:35:52.722078   49316 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:35:52.722297   49316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-3696/.minikube/bin
I0115 09:35:52.722954   49316 config.go:182] Loaded profile config "functional-945307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:35:52.723048   49316 config.go:182] Loaded profile config "functional-945307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:35:52.723472   49316 cli_runner.go:164] Run: docker container inspect functional-945307 --format={{.State.Status}}
I0115 09:35:52.743400   49316 ssh_runner.go:195] Run: systemctl --version
I0115 09:35:52.743459   49316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-945307
I0115 09:35:52.766955   49316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/functional-945307/id_rsa Username:docker}
I0115 09:35:52.929992   49316 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

TestFunctional/parallel/ImageCommands/ImageBuild (7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945307 ssh pgrep buildkitd: exit status 1 (287.806405ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image build -t localhost/my-image:functional-945307 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-945307 image build -t localhost/my-image:functional-945307 testdata/build --alsologtostderr: (6.474088901s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-945307 image build -t localhost/my-image:functional-945307 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 6178e3d104e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-945307
--> 31974e649fb
Successfully tagged localhost/my-image:functional-945307
31974e649fb8ad8200745c404e426215cd25b3229940cdbd08b543a691debc0b
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-945307 image build -t localhost/my-image:functional-945307 testdata/build --alsologtostderr:
I0115 09:35:53.396207   49436 out.go:296] Setting OutFile to fd 1 ...
I0115 09:35:53.396362   49436 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:35:53.396369   49436 out.go:309] Setting ErrFile to fd 2...
I0115 09:35:53.396373   49436 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 09:35:53.396577   49436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-3696/.minikube/bin
I0115 09:35:53.397207   49436 config.go:182] Loaded profile config "functional-945307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:35:53.397741   49436 config.go:182] Loaded profile config "functional-945307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 09:35:53.398229   49436 cli_runner.go:164] Run: docker container inspect functional-945307 --format={{.State.Status}}
I0115 09:35:53.414587   49436 ssh_runner.go:195] Run: systemctl --version
I0115 09:35:53.414651   49436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-945307
I0115 09:35:53.434359   49436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/functional-945307/id_rsa Username:docker}
I0115 09:35:53.573973   49436 build_images.go:151] Building image from path: /tmp/build.2632466274.tar
I0115 09:35:53.574057   49436 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0115 09:35:53.632577   49436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2632466274.tar
I0115 09:35:53.636274   49436 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2632466274.tar: stat -c "%s %y" /var/lib/minikube/build/build.2632466274.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2632466274.tar': No such file or directory
I0115 09:35:53.636303   49436 ssh_runner.go:362] scp /tmp/build.2632466274.tar --> /var/lib/minikube/build/build.2632466274.tar (3072 bytes)
I0115 09:35:53.661547   49436 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2632466274
I0115 09:35:53.732491   49436 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2632466274 -xf /var/lib/minikube/build/build.2632466274.tar
I0115 09:35:53.743520   49436 crio.go:297] Building image: /var/lib/minikube/build/build.2632466274
I0115 09:35:53.743591   49436 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-945307 /var/lib/minikube/build/build.2632466274 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0115 09:35:59.782015   49436 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-945307 /var/lib/minikube/build/build.2632466274 --cgroup-manager=cgroupfs: (6.038368372s)
I0115 09:35:59.782087   49436 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2632466274
I0115 09:35:59.790710   49436 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2632466274.tar
I0115 09:35:59.798444   49436 build_images.go:207] Built localhost/my-image:functional-945307 from /tmp/build.2632466274.tar
I0115 09:35:59.798483   49436 build_images.go:123] succeeded building to: functional-945307
I0115 09:35:59.798487   49436 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (7.00s)

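The three STEP lines above imply a build context whose Containerfile contains FROM gcr.io/k8s-minikube/busybox, RUN true, and ADD content.txt /. Below is a hypothetical Go sketch that recreates such a context and shells out to the same image build command used at functional_test.go:314; the temporary directory and the contents of content.txt are assumptions for illustration, not the actual testdata/build payload.

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Recreate a build context equivalent to the three STEPs logged above.
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	containerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(containerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("test\n"), 0o644); err != nil {
		panic(err)
	}

	// Same command shape as the test: minikube -p <profile> image build -t <tag> <dir>
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-945307",
		"image", "build", "-t", "localhost/my-image:functional-945307", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
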
TestFunctional/parallel/ImageCommands/Setup (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.117434444s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-945307
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.15s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image load --daemon gcr.io/google-containers/addon-resizer:functional-945307 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-945307 image load --daemon gcr.io/google-containers/addon-resizer:functional-945307 --alsologtostderr: (4.15277308s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.40s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/MountCmd/any-port (7.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-945307 /tmp/TestFunctionalparallelMountCmdany-port1874431371/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705311327996103441" to /tmp/TestFunctionalparallelMountCmdany-port1874431371/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705311327996103441" to /tmp/TestFunctionalparallelMountCmdany-port1874431371/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705311327996103441" to /tmp/TestFunctionalparallelMountCmdany-port1874431371/001/test-1705311327996103441
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945307 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (415.433414ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 15 09:35 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 15 09:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 15 09:35 test-1705311327996103441
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh cat /mount-9p/test-1705311327996103441
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-945307 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e9e55409-5d7e-45e5-89a8-b36c1714dc4b] Pending
helpers_test.go:344: "busybox-mount" [e9e55409-5d7e-45e5-89a8-b36c1714dc4b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e9e55409-5d7e-45e5-89a8-b36c1714dc4b] Running
helpers_test.go:344: "busybox-mount" [e9e55409-5d7e-45e5-89a8-b36c1714dc4b] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e9e55409-5d7e-45e5-89a8-b36c1714dc4b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003893183s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-945307 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945307 /tmp/TestFunctionalparallelMountCmdany-port1874431371/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.13s)

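In the any-port run above, the first findmnt probe exits with status 1 and the test simply retries until the 9p mount becomes visible. A rough Go sketch of that start-then-poll pattern follows; the host directory is a made-up path (the test used a per-run temp directory), while the mount and ssh invocations are the ones shown in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Assumed host directory to share into the guest.
	src := "/tmp/demo-mount"
	if err := os.MkdirAll(src, 0o755); err != nil {
		panic(err)
	}

	// Start the 9p mount in the background, as the test's daemon: line does.
	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-945307",
		src+":/mount-9p", "--alsologtostderr", "-v=1")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// Poll until findmnt inside the guest sees the 9p mount; the first probe in the
	// log above failed with exit status 1 before the mount was ready.
	for i := 0; i < 30; i++ {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-945307",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("mount is visible in the guest")
			return
		}
		time.Sleep(time.Second)
	}
	panic("mount never became visible")
}
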
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "340.416219ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "68.853462ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "323.901211ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "78.468271ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image load --daemon gcr.io/google-containers/addon-resizer:functional-945307 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-945307 image load --daemon gcr.io/google-containers/addon-resizer:functional-945307 --alsologtostderr: (2.700932965s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.96s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-945307
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image load --daemon gcr.io/google-containers/addon-resizer:functional-945307 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-945307 image load --daemon gcr.io/google-containers/addon-resizer:functional-945307 --alsologtostderr: (5.31996815s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-945307 image ls: (1.294459285s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.62s)

TestFunctional/parallel/MountCmd/specific-port (2.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-945307 /tmp/TestFunctionalparallelMountCmdspecific-port1844867502/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945307 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (356.402203ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945307 /tmp/TestFunctionalparallelMountCmdspecific-port1844867502/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945307 ssh "sudo umount -f /mount-9p": exit status 1 (491.778771ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-945307 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945307 /tmp/TestFunctionalparallelMountCmdspecific-port1844867502/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.43s)

TestFunctional/parallel/ServiceCmd/List (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.38s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 service list -o json
functional_test.go:1493: Took "514.07653ms" to run "out/minikube-linux-amd64 -p functional-945307 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31849
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

TestFunctional/parallel/ServiceCmd/Format (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-945307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup99208059/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-945307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup99208059/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-945307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup99208059/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945307 ssh "findmnt -T" /mount1: exit status 1 (504.409398ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-945307 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup99208059/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup99208059/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup99208059/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.04s)

TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31849
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.54s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-945307 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-945307 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-945307 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-945307 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 46318: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-945307 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-945307 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5c0826f4-f365-4812-8398-22f8837766c0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5c0826f4-f365-4812-8398-22f8837766c0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00407238s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.27s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image save gcr.io/google-containers/addon-resizer:functional-945307 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.99s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image rm gcr.io/google-containers/addon-resizer:functional-945307 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-945307 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.371164118s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.62s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-945307
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 image save --daemon gcr.io/google-containers/addon-resizer:functional-945307 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-945307 image save --daemon gcr.io/google-containers/addon-resizer:functional-945307 --alsologtostderr: (2.345392685s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-945307
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.38s)

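Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile above exercise a save, remove, and reload round-trip through a tarball (ImageSaveDaemon does the same through the local Docker daemon). A hypothetical Go sketch of that file-based sequence, using the commands exactly as logged, is below; the tarball path is an assumption, since the test wrote into its Jenkins workspace instead.

package main

import (
	"os"
	"os/exec"
)

// run shells out to the given command, streams its output, and panics on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	const (
		mk  = "out/minikube-linux-amd64"
		img = "gcr.io/google-containers/addon-resizer:functional-945307"
		tar = "/tmp/addon-resizer-save.tar" // assumed path
	)
	// Save the image from the cluster, remove it, then load it back from the tarball,
	// mirroring the ImageSaveToFile / ImageRemove / ImageLoadFromFile sequence above.
	run(mk, "-p", "functional-945307", "image", "save", img, tar, "--alsologtostderr")
	run(mk, "-p", "functional-945307", "image", "rm", img, "--alsologtostderr")
	run(mk, "-p", "functional-945307", "image", "load", tar, "--alsologtostderr")
	run(mk, "-p", "functional-945307", "image", "ls")
}
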
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-945307 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.187.233 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

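The tunnel checks above boil down to: start minikube tunnel, wait for the nginx-svc pod, read the LoadBalancer ingress IP via the kubectl jsonpath query shown, and fetch the service over plain HTTP (the report records http://10.108.187.233 as working). A small Go sketch of the IP lookup and HTTP probe follows, assuming a tunnel is already running and kubectl points at the functional-945307 context.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Read the LoadBalancer IP the same way the test does (kubectl jsonpath).
	out, err := exec.Command("kubectl", "--context", "functional-945307",
		"get", "svc", "nginx-svc", "-o",
		"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out))

	// Hit the service directly; this only works while "minikube tunnel" is up.
	resp, err := http.Get("http://" + ip + "/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET http://%s/ -> %s (%d bytes)\n", ip, resp.Status, len(body))
}
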
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-945307 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-945307 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-945307
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-945307
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-945307
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (65.21s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-865640 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0115 09:36:49.169029   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-865640 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m5.212333379s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (65.21s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.21s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-865640 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-865640 addons enable ingress --alsologtostderr -v=5: (8.210079829s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.21s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.55s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-865640 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.55s)

TestJSONOutput/start/Command (66.39s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-964128 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0115 09:40:28.421461   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/functional-945307/client.crt: no such file or directory
E0115 09:40:30.982356   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/functional-945307/client.crt: no such file or directory
E0115 09:40:36.103350   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/functional-945307/client.crt: no such file or directory
E0115 09:40:46.343870   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/functional-945307/client.crt: no such file or directory
E0115 09:41:06.824821   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/functional-945307/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-964128 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m6.389344404s)
--- PASS: TestJSONOutput/start/Command (66.39s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-964128 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-964128 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.76s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-964128 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-964128 --output=json --user=testUser: (5.76248177s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-828073 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-828073 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (87.631927ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4123d3b0-eebc-421f-8bee-fed63ea09c88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-828073] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b49db17-e147-45ad-950c-dab0b8bface7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17953"}}
	{"specversion":"1.0","id":"13f93eb1-b2a8-46fd-91d5-84cd70312546","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fa46e1bd-4e5d-4f93-8138-ae785f756fe7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig"}}
	{"specversion":"1.0","id":"4bf65cb2-9fc0-44a5-8bc3-9548c527a518","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube"}}
	{"specversion":"1.0","id":"2c637165-daf0-41e9-83dc-7f3f28699b0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2e4b987f-3a45-4a02-9ed1-e1b28dacfb70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0d144fbb-ed27-43e1-b1c1-08fed13d2157","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-828073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-828073
--- PASS: TestErrorJSONOutput (0.24s)
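The stdout block above is the line-delimited CloudEvents stream that minikube emits with --output=json (type io.k8s.sigs.minikube.step, .info, or .error, with the payload under data). The sketch below is not part of the test suite; it is a minimal Go reader, under an assumed file name decode_events.go, that consumes such a stream from stdin and prints step progress and errors. The field names (type, data, message, currentstep, totalsteps, exitcode) are taken directly from the events shown above.

// decode_events.go: illustrative sketch that reads line-delimited minikube JSON
// events from stdin and summarizes them. Non-JSON lines are skipped.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
	Data map[string]string `json:"data"` // message, currentstep, totalsteps, exitcode, ...
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error (exit %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
		default:
			fmt.Println(ev.Data["message"])
		}
	}
}

For example, piping "out/minikube-linux-amd64 start -p json-output-964128 --output=json --user=testUser" into "go run decode_events.go" would print one line per step and surface errors such as the DRV_UNSUPPORTED_OS event above.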

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (28.97s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-304723 --network=
E0115 09:41:47.785795   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/functional-945307/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-304723 --network=: (26.861748156s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-304723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-304723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-304723: (2.087608811s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.97s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (26.75s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-837705 --network=bridge
E0115 09:42:29.333307   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
E0115 09:42:29.338626   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
E0115 09:42:29.348903   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
E0115 09:42:29.369247   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
E0115 09:42:29.409582   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
E0115 09:42:29.489918   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
E0115 09:42:29.650520   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
E0115 09:42:29.971122   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
E0115 09:42:30.612103   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
E0115 09:42:31.893249   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
E0115 09:42:34.453479   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
E0115 09:42:39.573892   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-837705 --network=bridge: (24.80965574s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-837705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-837705
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-837705: (1.921347441s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.75s)

                                                
                                    
x
+
TestKicExistingNetwork (24.45s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-105515 --network=existing-network
E0115 09:42:49.814876   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-105515 --network=existing-network: (22.46108109s)
helpers_test.go:175: Cleaning up "existing-network-105515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-105515
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-105515: (1.852174966s)
--- PASS: TestKicExistingNetwork (24.45s)

                                                
                                    
x
+
TestKicCustomSubnet (27.78s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-293008 --subnet=192.168.60.0/24
E0115 09:43:09.706355   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/functional-945307/client.crt: no such file or directory
E0115 09:43:10.295086   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-293008 --subnet=192.168.60.0/24: (25.691360051s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-293008 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-293008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-293008
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-293008: (2.073431465s)
--- PASS: TestKicCustomSubnet (27.78s)
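The verification step above reads the created network's first IPAM subnet with docker network inspect and a Go template. As a rough sketch only (check_subnet.go is a hypothetical helper, not part of the suite; it assumes the docker CLI is on PATH), the same comparison driven from Go looks like this:

// check_subnet.go: illustrative sketch mirroring the subnet check above; it runs
// docker network inspect and compares the reported subnet to the requested one.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: check_subnet <network-name> <expected-subnet>")
		os.Exit(2)
	}
	network, want := os.Args[1], os.Args[2]
	out, err := exec.Command("docker", "network", "inspect", network,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "docker network inspect failed: %v\n", err)
		os.Exit(1)
	}
	got := strings.TrimSpace(string(out))
	if got != want {
		fmt.Fprintf(os.Stderr, "subnet mismatch: got %q, want %q\n", got, want)
		os.Exit(1)
	}
	fmt.Printf("network %s uses expected subnet %s\n", network, want)
}

Running "go run check_subnet.go custom-subnet-293008 192.168.60.0/24" would mirror the check performed by kic_custom_network_test.go:161 above.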

                                                
                                    
x
+
TestKicStaticIP (28.26s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-371669 --static-ip=192.168.200.200
E0115 09:43:51.256696   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-371669 --static-ip=192.168.200.200: (26.03837025s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-371669 ip
helpers_test.go:175: Cleaning up "static-ip-371669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-371669
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-371669: (2.08589776s)
--- PASS: TestKicStaticIP (28.26s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (46.86s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-147891 --driver=docker  --container-runtime=crio
E0115 09:44:05.325807   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-147891 --driver=docker  --container-runtime=crio: (21.062788707s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-150914 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-150914 --driver=docker  --container-runtime=crio: (20.702784845s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-147891
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-150914
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-150914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-150914
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-150914: (1.851453917s)
helpers_test.go:175: Cleaning up "first-147891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-147891
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-147891: (2.202531217s)
--- PASS: TestMinikubeProfile (46.86s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.25s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-470166 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-470166 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.247804634s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.25s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-470166 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (5.28s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-483882 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-483882 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.277420109s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.28s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-483882 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-470166 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-470166 --alsologtostderr -v=5: (1.633995787s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-483882 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-483882
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-483882: (1.18729781s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (6.99s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-483882
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-483882: (5.987883206s)
E0115 09:45:13.177755   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (6.99s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-483882 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (57.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-218062 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0115 09:45:25.862879   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/functional-945307/client.crt: no such file or directory
E0115 09:45:53.547036   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/functional-945307/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-218062 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (57.059826713s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (57.52s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (3.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-218062 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-218062 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-218062 -- rollout status deployment/busybox: (1.655794796s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-218062 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-218062 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-218062 -- exec busybox-5bc68d56bd-cplh9 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-218062 -- exec busybox-5bc68d56bd-djgvv -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-218062 -- exec busybox-5bc68d56bd-cplh9 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-218062 -- exec busybox-5bc68d56bd-djgvv -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-218062 -- exec busybox-5bc68d56bd-cplh9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-218062 -- exec busybox-5bc68d56bd-djgvv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.30s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (19.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-218062 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-218062 -v 3 --alsologtostderr: (18.41237122s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.03s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-218062 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 cp testdata/cp-test.txt multinode-218062:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 cp multinode-218062:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3419661503/001/cp-test_multinode-218062.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 cp multinode-218062:/home/docker/cp-test.txt multinode-218062-m02:/home/docker/cp-test_multinode-218062_multinode-218062-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062-m02 "sudo cat /home/docker/cp-test_multinode-218062_multinode-218062-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 cp multinode-218062:/home/docker/cp-test.txt multinode-218062-m03:/home/docker/cp-test_multinode-218062_multinode-218062-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062-m03 "sudo cat /home/docker/cp-test_multinode-218062_multinode-218062-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 cp testdata/cp-test.txt multinode-218062-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 cp multinode-218062-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3419661503/001/cp-test_multinode-218062-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 cp multinode-218062-m02:/home/docker/cp-test.txt multinode-218062:/home/docker/cp-test_multinode-218062-m02_multinode-218062.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062 "sudo cat /home/docker/cp-test_multinode-218062-m02_multinode-218062.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 cp multinode-218062-m02:/home/docker/cp-test.txt multinode-218062-m03:/home/docker/cp-test_multinode-218062-m02_multinode-218062-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062-m03 "sudo cat /home/docker/cp-test_multinode-218062-m02_multinode-218062-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 cp testdata/cp-test.txt multinode-218062-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 cp multinode-218062-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3419661503/001/cp-test_multinode-218062-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 cp multinode-218062-m03:/home/docker/cp-test.txt multinode-218062:/home/docker/cp-test_multinode-218062-m03_multinode-218062.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062 "sudo cat /home/docker/cp-test_multinode-218062-m03_multinode-218062.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 cp multinode-218062-m03:/home/docker/cp-test.txt multinode-218062-m02:/home/docker/cp-test_multinode-218062-m03_multinode-218062-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 ssh -n multinode-218062-m02 "sudo cat /home/docker/cp-test_multinode-218062-m03_multinode-218062-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.39s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-218062 node stop m03: (1.188724991s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-218062 status: exit status 7 (468.006519ms)

                                                
                                                
-- stdout --
	multinode-218062
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-218062-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-218062-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-218062 status --alsologtostderr: exit status 7 (477.499347ms)

                                                
                                                
-- stdout --
	multinode-218062
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-218062-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-218062-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 09:46:50.075764  107556 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:46:50.075922  107556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:46:50.075937  107556 out.go:309] Setting ErrFile to fd 2...
	I0115 09:46:50.075947  107556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:46:50.076152  107556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-3696/.minikube/bin
	I0115 09:46:50.076343  107556 out.go:303] Setting JSON to false
	I0115 09:46:50.076383  107556 mustload.go:65] Loading cluster: multinode-218062
	I0115 09:46:50.076424  107556 notify.go:220] Checking for updates...
	I0115 09:46:50.076967  107556 config.go:182] Loaded profile config "multinode-218062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:46:50.076991  107556 status.go:255] checking status of multinode-218062 ...
	I0115 09:46:50.077501  107556 cli_runner.go:164] Run: docker container inspect multinode-218062 --format={{.State.Status}}
	I0115 09:46:50.096757  107556 status.go:330] multinode-218062 host status = "Running" (err=<nil>)
	I0115 09:46:50.096788  107556 host.go:66] Checking if "multinode-218062" exists ...
	I0115 09:46:50.097149  107556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-218062
	I0115 09:46:50.114082  107556 host.go:66] Checking if "multinode-218062" exists ...
	I0115 09:46:50.114364  107556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 09:46:50.114413  107556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062
	I0115 09:46:50.131536  107556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062/id_rsa Username:docker}
	I0115 09:46:50.222130  107556 ssh_runner.go:195] Run: systemctl --version
	I0115 09:46:50.226076  107556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:46:50.236274  107556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 09:46:50.289931  107556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2024-01-15 09:46:50.280123373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 09:46:50.290462  107556 kubeconfig.go:92] found "multinode-218062" server: "https://192.168.58.2:8443"
	I0115 09:46:50.290483  107556 api_server.go:166] Checking apiserver status ...
	I0115 09:46:50.290514  107556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 09:46:50.301515  107556 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1434/cgroup
	I0115 09:46:50.310526  107556 api_server.go:182] apiserver freezer: "12:freezer:/docker/895276697ddf292070b37b36ad96b7f2291cd57ef760db46eff306facb766d84/crio/crio-c574295e958126a1062510aeae9fccd3073ee7a3c125e57dd5002bd15d86a176"
	I0115 09:46:50.310587  107556 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/895276697ddf292070b37b36ad96b7f2291cd57ef760db46eff306facb766d84/crio/crio-c574295e958126a1062510aeae9fccd3073ee7a3c125e57dd5002bd15d86a176/freezer.state
	I0115 09:46:50.319348  107556 api_server.go:204] freezer state: "THAWED"
	I0115 09:46:50.319377  107556 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0115 09:46:50.323560  107556 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0115 09:46:50.323589  107556 status.go:421] multinode-218062 apiserver status = Running (err=<nil>)
	I0115 09:46:50.323601  107556 status.go:257] multinode-218062 status: &{Name:multinode-218062 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 09:46:50.323623  107556 status.go:255] checking status of multinode-218062-m02 ...
	I0115 09:46:50.323932  107556 cli_runner.go:164] Run: docker container inspect multinode-218062-m02 --format={{.State.Status}}
	I0115 09:46:50.341320  107556 status.go:330] multinode-218062-m02 host status = "Running" (err=<nil>)
	I0115 09:46:50.341348  107556 host.go:66] Checking if "multinode-218062-m02" exists ...
	I0115 09:46:50.341603  107556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-218062-m02
	I0115 09:46:50.358508  107556 host.go:66] Checking if "multinode-218062-m02" exists ...
	I0115 09:46:50.358765  107556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 09:46:50.358804  107556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-218062-m02
	I0115 09:46:50.375289  107556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17953-3696/.minikube/machines/multinode-218062-m02/id_rsa Username:docker}
	I0115 09:46:50.466049  107556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 09:46:50.476167  107556 status.go:257] multinode-218062-m02 status: &{Name:multinode-218062-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0115 09:46:50.476196  107556 status.go:255] checking status of multinode-218062-m03 ...
	I0115 09:46:50.476453  107556 cli_runner.go:164] Run: docker container inspect multinode-218062-m03 --format={{.State.Status}}
	I0115 09:46:50.492693  107556 status.go:330] multinode-218062-m03 host status = "Stopped" (err=<nil>)
	I0115 09:46:50.492714  107556 status.go:343] host is not running, skipping remaining checks
	I0115 09:46:50.492725  107556 status.go:257] multinode-218062-m03 status: &{Name:multinode-218062-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)
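The stderr trace above shows how the status command decides apiserver state: inspect the container, locate the kube-apiserver process and its freezer cgroup, then probe https://192.168.58.2:8443/healthz and treat a 200 "ok" as Running. The sketch below covers only that final probe and is not minikube's implementation; healthz_probe.go is a hypothetical name, and skipping TLS verification is an assumption for brevity (a real client would trust the cluster CA from the kubeconfig).

// healthz_probe.go: illustrative sketch of the final step in the status check
// above; GET the apiserver /healthz endpoint and report Running on a 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: healthz_probe https://<node-ip>:8443/healthz")
		os.Exit(2)
	}
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch: skip certificate verification instead of
		// loading the cluster CA, since the apiserver uses an internal cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(os.Args[1])
	if err != nil {
		fmt.Printf("apiserver: Stopped (%v)\n", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode == http.StatusOK {
		fmt.Printf("apiserver: Running (%s)\n", string(body))
		return
	}
	fmt.Printf("apiserver: degraded, healthz returned %d\n", resp.StatusCode)
}

For example, "go run healthz_probe.go https://192.168.58.2:8443/healthz" would report Running against the healthy control plane shown above.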

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (10.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-218062 node start m03 --alsologtostderr: (10.172226859s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.88s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (109.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-218062
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-218062
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-218062: (24.620265725s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-218062 --wait=true -v=8 --alsologtostderr
E0115 09:47:29.334034   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
E0115 09:47:57.018580   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-218062 --wait=true -v=8 --alsologtostderr: (1m24.905901458s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-218062
--- PASS: TestMultiNode/serial/RestartKeepsNodes (109.65s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (4.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-218062 node delete m03: (4.096036778s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.70s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 stop
E0115 09:49:05.325774   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-218062 stop: (23.510504721s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-218062 status: exit status 7 (99.423663ms)

                                                
                                                
-- stdout --
	multinode-218062
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-218062-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-218062 status --alsologtostderr: exit status 7 (91.926281ms)

                                                
                                                
-- stdout --
	multinode-218062
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-218062-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 09:49:19.391594  117294 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:49:19.391850  117294 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:49:19.391858  117294 out.go:309] Setting ErrFile to fd 2...
	I0115 09:49:19.391862  117294 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:49:19.392057  117294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-3696/.minikube/bin
	I0115 09:49:19.392232  117294 out.go:303] Setting JSON to false
	I0115 09:49:19.392262  117294 mustload.go:65] Loading cluster: multinode-218062
	I0115 09:49:19.392350  117294 notify.go:220] Checking for updates...
	I0115 09:49:19.392665  117294 config.go:182] Loaded profile config "multinode-218062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:49:19.392678  117294 status.go:255] checking status of multinode-218062 ...
	I0115 09:49:19.393078  117294 cli_runner.go:164] Run: docker container inspect multinode-218062 --format={{.State.Status}}
	I0115 09:49:19.409615  117294 status.go:330] multinode-218062 host status = "Stopped" (err=<nil>)
	I0115 09:49:19.409637  117294 status.go:343] host is not running, skipping remaining checks
	I0115 09:49:19.409643  117294 status.go:257] multinode-218062 status: &{Name:multinode-218062 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 09:49:19.409669  117294 status.go:255] checking status of multinode-218062-m02 ...
	I0115 09:49:19.409937  117294 cli_runner.go:164] Run: docker container inspect multinode-218062-m02 --format={{.State.Status}}
	I0115 09:49:19.426622  117294 status.go:330] multinode-218062-m02 host status = "Stopped" (err=<nil>)
	I0115 09:49:19.426644  117294 status.go:343] host is not running, skipping remaining checks
	I0115 09:49:19.426651  117294 status.go:257] multinode-218062-m02 status: &{Name:multinode-218062-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.70s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (73.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-218062 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0115 09:50:25.863066   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/functional-945307/client.crt: no such file or directory
E0115 09:50:28.370444   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-218062 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m12.744073905s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-218062 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (73.35s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (26.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-218062
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-218062-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-218062-m02 --driver=docker  --container-runtime=crio: exit status 14 (83.630739ms)

                                                
                                                
-- stdout --
	* [multinode-218062-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-218062-m02' is duplicated with machine name 'multinode-218062-m02' in profile 'multinode-218062'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-218062-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-218062-m03 --driver=docker  --container-runtime=crio: (24.447161977s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-218062
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-218062: exit status 80 (288.27555ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-218062
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-218062-m03 already exists in multinode-218062-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-218062-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-218062-m03: (1.847082297s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.73s)
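ValidateNameConflict exercises two collisions: a new profile that reuses an existing machine name (multinode-218062-m02) is rejected with MK_USAGE, and "node add" refuses to proceed when a standalone profile already owns the next node name (multinode-218062-m03). Outside the test, the simplest way to avoid both is to list what already exists before picking a name; a minimal sketch using commands that appear elsewhere in this report:

	# Show all profiles, then the nodes inside a given multi-node profile
	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 node list -p multinode-218062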

                                                
                                    
x
+
TestPreload (140.82s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-205380 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-205380 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m4.864624384s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-205380 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-205380
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-205380: (5.71317751s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-205380 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0115 09:52:29.332983   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-205380 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m6.782024625s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-205380 image list
helpers_test.go:175: Cleaning up "test-preload-205380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-205380
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-205380: (2.240905586s)
--- PASS: TestPreload (140.82s)
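TestPreload pulls an image into a cluster that was started with --preload=false, stops the node, restarts it with the current preload enabled, and then confirms the previously pulled image is still present. The same check can be done by hand; a sketch condensed from the commands in the log above (profile name reused purely for illustration):

	out/minikube-linux-amd64 -p test-preload-205380 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-205380
	out/minikube-linux-amd64 start -p test-preload-205380 --wait=true --driver=docker --container-runtime=crio
	# busybox should still appear here after the restart
	out/minikube-linux-amd64 -p test-preload-205380 image list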

                                                
                                    
x
+
TestScheduledStopUnix (97.85s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-188951 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-188951 --memory=2048 --driver=docker  --container-runtime=crio: (21.261901396s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-188951 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-188951 -n scheduled-stop-188951
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-188951 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-188951 --cancel-scheduled
E0115 09:54:05.325333   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-188951 -n scheduled-stop-188951
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-188951
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-188951 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-188951
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-188951: exit status 7 (79.04ms)

                                                
                                                
-- stdout --
	scheduled-stop-188951
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-188951 -n scheduled-stop-188951
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-188951 -n scheduled-stop-188951: exit status 7 (76.64392ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-188951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-188951
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-188951: (5.12294102s)
--- PASS: TestScheduledStopUnix (97.85s)
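The scheduled-stop flow is driven entirely by flags visible in the log: --schedule arms a delayed stop, --cancel-scheduled disarms it, and status --format={{.TimeToStop}} reports the pending countdown. A minimal sketch of the same sequence:

	# Arm a stop 5 minutes out, check the countdown, then cancel it
	out/minikube-linux-amd64 stop -p scheduled-stop-188951 --schedule 5m
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-188951
	out/minikube-linux-amd64 stop -p scheduled-stop-188951 --cancel-scheduled
	# Once a short schedule (e.g. 15s) fires, status exits 7 and reports Stopped
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-188951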

                                                
                                    
x
+
TestInsufficientStorage (10.32s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-350702 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-350702 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.906513845s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9fa24b02-7480-4516-9b49-9527f131008e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-350702] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1601735d-2bfa-4940-a314-d5d4efba0923","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17953"}}
	{"specversion":"1.0","id":"f4cee586-c049-4a4c-9fc2-efd1c442dadd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"265f7c47-a3c8-4039-a28c-f7ad3e1701dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig"}}
	{"specversion":"1.0","id":"22586fbf-9fdb-420f-a675-8819e48c2888","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube"}}
	{"specversion":"1.0","id":"2ce5547e-e0a7-41b9-b44c-0c6a00c01fed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"20812132-8df2-4f07-94f3-0e86361017d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"08ecca61-3d72-410f-b84e-d2fd1a6bee1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"25782658-a768-4ec0-a4bb-2fe690a16bda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ebe75b76-7307-4c3a-8d3c-271e88203944","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c83b503-d8e1-4d0d-ad78-1221324f57dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ebc8bb0d-7147-4d07-81a8-e88b92ae1413","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-350702 in cluster insufficient-storage-350702","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d4d5ac9b-6fd7-4842-b3bf-4d03e24e6f27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3bb6597b-a0aa-4689-b1db-6c450c536d82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"be4b6099-2945-494b-8ce8-fb4c1ab62e5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-350702 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-350702 --output=json --layout=cluster: exit status 7 (280.451951ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-350702","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-350702","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 09:55:11.978641  137550 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-350702" does not appear in /home/jenkins/minikube-integration/17953-3696/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-350702 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-350702 --output=json --layout=cluster: exit status 7 (282.125129ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-350702","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-350702","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 09:55:12.262377  137639 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-350702" does not appear in /home/jenkins/minikube-integration/17953-3696/kubeconfig
	E0115 09:55:12.272292  137639 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/insufficient-storage-350702/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-350702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-350702
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-350702: (1.845908681s)
--- PASS: TestInsufficientStorage (10.32s)
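With --output=json, minikube start emits one CloudEvents-style JSON object per line, and this test simulates a full disk through the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values visible in the stream. Pulling the RSRC_DOCKER_STORAGE error back out of such a capture is a one-liner; a sketch assuming jq is installed and the event stream was saved to a file named start.json (that filename is an assumption, not part of the test):

	# Print the message of any error events in the JSON event stream
	jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message' start.json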

                                                
                                    
x
+
TestRunningBinaryUpgrade (61.9s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2230313956 start -p running-upgrade-689019 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2230313956 start -p running-upgrade-689019 --memory=2200 --vm-driver=docker  --container-runtime=crio: (34.582459353s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-689019 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-689019 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.939275286s)
helpers_test.go:175: Cleaning up "running-upgrade-689019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-689019
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-689019: (2.757978536s)
--- PASS: TestRunningBinaryUpgrade (61.90s)
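The running-upgrade test starts a profile with an older release binary (here a v1.26.0 build cached under /tmp) and then re-runs start on the same profile with the binary under test, relying on minikube to reconcile the existing, still-running cluster in place. Sketched as two commands, with the old binary's random suffix shortened for illustration:

	# Old release creates the cluster, the new binary upgrades it in place
	/tmp/minikube-v1.26.0.<suffix> start -p running-upgrade-689019 --memory=2200 --vm-driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p running-upgrade-689019 --memory=2200 --driver=docker --container-runtime=crio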

                                                
                                    
x
+
TestKubernetesUpgrade (347.37s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-946105 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-946105 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (53.308207503s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-946105
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-946105: (1.220418051s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-946105 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-946105 status --format={{.Host}}: exit status 7 (80.48851ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-946105 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-946105 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m27.87558799s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-946105 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-946105 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-946105 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (84.37265ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-946105] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-946105
	    minikube start -p kubernetes-upgrade-946105 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9461052 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-946105 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-946105 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-946105 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.611507066s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-946105" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-946105
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-946105: (2.125802401s)
--- PASS: TestKubernetesUpgrade (347.37s)
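The upgrade path here is stop-then-start with a newer --kubernetes-version; starting the same profile with an older version is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED), and the only supported downgrade route is the delete/recreate sequence from the suggestion above. Condensed from the log:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-946105 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-946105
	out/minikube-linux-amd64 start -p kubernetes-upgrade-946105 --kubernetes-version=v1.29.0-rc.2 --driver=docker --container-runtime=crio
	# Going back down requires recreating the profile
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-946105
	out/minikube-linux-amd64 start -p kubernetes-upgrade-946105 --kubernetes-version=v1.16.0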

                                                
                                    
x
+
TestMissingContainerUpgrade (133.98s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3062712054 start -p missing-upgrade-649491 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3062712054 start -p missing-upgrade-649491 --memory=2200 --driver=docker  --container-runtime=crio: (1m3.674777661s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-649491
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-649491: (12.879590026s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-649491
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-649491 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-649491 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.876661791s)
helpers_test.go:175: Cleaning up "missing-upgrade-649491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-649491
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-649491: (2.013745753s)
--- PASS: TestMissingContainerUpgrade (133.98s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.47s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-620214 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-620214 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (103.637958ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-620214] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (35.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-620214 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-620214 --driver=docker  --container-runtime=crio: (35.534154775s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-620214 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.99s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (89.17s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3104049667 start -p stopped-upgrade-647759 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0115 09:55:25.862395   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/functional-945307/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3104049667 start -p stopped-upgrade-647759 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m4.069594461s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3104049667 -p stopped-upgrade-647759 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3104049667 -p stopped-upgrade-647759 stop: (2.170554679s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-647759 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-647759 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.933770683s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (89.17s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (11.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-620214 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-620214 --no-kubernetes --driver=docker  --container-runtime=crio: (9.092073637s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-620214 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-620214 status -o json: exit status 2 (290.658618ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-620214","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-620214
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-620214: (1.943652088s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-620214 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-620214 --no-kubernetes --driver=docker  --container-runtime=crio: (7.680989327s)
--- PASS: TestNoKubernetes/serial/Start (7.68s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-620214 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-620214 "sudo systemctl is-active --quiet service kubelet": exit status 1 (368.86297ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)
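The non-zero exit above is the passing outcome: systemctl is-active exits 0 only when the unit is active, and the status 3 surfaced through ssh ("Process exited with status 3") is systemd's convention for an inactive unit, which is exactly what a --no-kubernetes profile should report. The same check by hand:

	# Expect a non-zero exit (and "inactive" without --quiet) when Kubernetes is not running
	out/minikube-linux-amd64 ssh -p NoKubernetes-620214 "sudo systemctl is-active kubelet"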

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (6.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (5.577033636s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (6.48s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-620214
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-620214: (1.251012498s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-620214 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-620214 --driver=docker  --container-runtime=crio: (6.146147164s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.15s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-620214 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-620214 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.129751ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
x
+
TestPause/serial/Start (75.17s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-167587 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-167587 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m15.16982764s)
--- PASS: TestPause/serial/Start (75.17s)
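This step only brings up the pause-167587 profile; pausing and resuming it are separate minikube subcommands exercised in later serial steps. For reference, a minimal sketch of those commands (not part of this step):

	out/minikube-linux-amd64 pause -p pause-167587
	out/minikube-linux-amd64 unpause -p pause-167587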

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-647759
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (7.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-011893 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-011893 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (284.263289ms)

                                                
                                                
-- stdout --
	* [false-011893] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 09:57:30.283902  170826 out.go:296] Setting OutFile to fd 1 ...
	I0115 09:57:30.284061  170826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:57:30.284067  170826 out.go:309] Setting ErrFile to fd 2...
	I0115 09:57:30.284074  170826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 09:57:30.284356  170826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-3696/.minikube/bin
	I0115 09:57:30.285168  170826 out.go:303] Setting JSON to false
	I0115 09:57:30.286387  170826 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2400,"bootTime":1705310250,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 09:57:30.286500  170826 start.go:138] virtualization: kvm guest
	I0115 09:57:30.317831  170826 out.go:177] * [false-011893] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 09:57:30.337387  170826 notify.go:220] Checking for updates...
	I0115 09:57:30.340390  170826 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 09:57:30.342906  170826 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 09:57:30.348405  170826 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-3696/kubeconfig
	I0115 09:57:30.349997  170826 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-3696/.minikube
	I0115 09:57:30.351763  170826 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 09:57:30.354047  170826 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 09:57:30.356141  170826 config.go:182] Loaded profile config "cert-expiration-263869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:57:30.356249  170826 config.go:182] Loaded profile config "kubernetes-upgrade-946105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0115 09:57:30.356328  170826 config.go:182] Loaded profile config "pause-167587": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 09:57:30.356429  170826 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 09:57:30.382106  170826 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 09:57:30.382240  170826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 09:57:30.470167  170826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:76 SystemTime:2024-01-15 09:57:30.459419357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 09:57:30.470304  170826 docker.go:295] overlay module found
	I0115 09:57:30.474354  170826 out.go:177] * Using the docker driver based on user configuration
	I0115 09:57:30.476169  170826 start.go:298] selected driver: docker
	I0115 09:57:30.476191  170826 start.go:902] validating driver "docker" against <nil>
	I0115 09:57:30.476206  170826 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 09:57:30.479930  170826 out.go:177] 
	W0115 09:57:30.481866  170826 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0115 09:57:30.483560  170826 out.go:177] 

                                                
                                                
** /stderr **
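The exit status 14 is the expected result: with the crio runtime minikube requires some CNI, so --cni=false is rejected before any node is created, and the debug dump below consequently finds no false-011893 profile or kubectl context. For comparison, a start line that the validation would accept (an illustration, not something this test runs) simply names a CNI:

	out/minikube-linux-amd64 start -p false-011893 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio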
net_test.go:88: 
----------------------- debugLogs start: false-011893 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-011893

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-011893

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-011893

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-011893

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-011893

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-011893

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-011893

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-011893

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-011893

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-011893

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-011893

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-011893" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-011893" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 09:57:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-167587
contexts:
- context:
    cluster: pause-167587
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 09:57:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-167587
  name: pause-167587
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-167587
  user:
    client-certificate: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/pause-167587/client.crt
    client-key: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/pause-167587/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-011893

>>> host: docker daemon status:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: docker daemon config:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: /etc/docker/daemon.json:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: docker system info:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: cri-docker daemon status:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: cri-docker daemon config:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: cri-dockerd version:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: containerd daemon status:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: containerd daemon config:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: /etc/containerd/config.toml:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: containerd config dump:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: crio daemon status:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: crio daemon config:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: /etc/crio:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

>>> host: crio config:
* Profile "false-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-011893"

----------------------- debugLogs end: false-011893 [took: 6.823040709s] --------------------------------
helpers_test.go:175: Cleaning up "false-011893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-011893
--- PASS: TestNetworkPlugins/group/false (7.29s)

TestPause/serial/SecondStartNoReconfiguration (35.37s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-167587 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-167587 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.338600505s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.37s)

TestPause/serial/Pause (0.82s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-167587 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.82s)

TestPause/serial/VerifyStatus (0.35s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-167587 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-167587 --output=json --layout=cluster: exit status 2 (345.975007ms)

-- stdout --
	{"Name":"pause-167587","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-167587","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.35s)

TestPause/serial/Unpause (0.71s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-167587 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

TestPause/serial/PauseAgain (0.91s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-167587 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.91s)

TestPause/serial/DeletePaused (3.23s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-167587 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-167587 --alsologtostderr -v=5: (3.22914332s)
--- PASS: TestPause/serial/DeletePaused (3.23s)

TestStartStop/group/old-k8s-version/serial/FirstStart (116.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-777386 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-777386 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m56.115053113s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (116.12s)

TestPause/serial/VerifyDeletedResources (16.18s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0115 09:58:52.379016   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.122270561s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-167587
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-167587: exit status 1 (16.792392ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-167587: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (16.18s)

TestStartStop/group/embed-certs/serial/FirstStart (70.5s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-204032 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0115 09:59:05.326006   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-204032 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m10.497736092s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (70.50s)

TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-204032 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9453661d-f1b2-4189-a0a0-2c8cf4961a20] Pending
helpers_test.go:344: "busybox" [9453661d-f1b2-4189-a0a0-2c8cf4961a20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9453661d-f1b2-4189-a0a0-2c8cf4961a20] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003910679s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-204032 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-204032 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-204032 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/embed-certs/serial/Stop (11.89s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-204032 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-204032 --alsologtostderr -v=3: (11.887127932s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.89s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-204032 -n embed-certs-204032
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-204032 -n embed-certs-204032: exit status 7 (87.935224ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-204032 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (333.15s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-204032 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0115 10:00:25.862719   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/functional-945307/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-204032 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m32.787191143s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-204032 -n embed-certs-204032
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (333.15s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-777386 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0fbea81f-a0d9-4e57-9da6-230a3db5661f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0fbea81f-a0d9-4e57-9da6-230a3db5661f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.003177057s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-777386 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-777386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-777386 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/old-k8s-version/serial/Stop (11.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-777386 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-777386 --alsologtostderr -v=3: (11.832488784s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.83s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-777386 -n old-k8s-version-777386
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-777386 -n old-k8s-version-777386: exit status 7 (81.058756ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-777386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (444.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-777386 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-777386 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m24.309262606s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-777386 -n old-k8s-version-777386
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (444.64s)

TestStartStop/group/no-preload/serial/FirstStart (48.68s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-274481 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-274481 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (48.675157756s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (48.68s)

TestStartStop/group/no-preload/serial/DeployApp (7.25s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-274481 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [42b8d3ec-abe9-4f1c-874d-800a57716f8a] Pending
helpers_test.go:344: "busybox" [42b8d3ec-abe9-4f1c-874d-800a57716f8a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [42b8d3ec-abe9-4f1c-874d-800a57716f8a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.003915765s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-274481 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.25s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-274481 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-274481 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/no-preload/serial/Stop (11.85s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-274481 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-274481 --alsologtostderr -v=3: (11.849173838s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.85s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-274481 -n no-preload-274481
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-274481 -n no-preload-274481: exit status 7 (84.931776ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-274481 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (343.21s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-274481 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0115 10:02:29.333281   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/ingress-addon-legacy-865640/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-274481 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (5m42.842881542s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-274481 -n no-preload-274481
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (343.21s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-134264 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0115 10:04:05.325685   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-134264 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m6.420346291s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.42s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-134264 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fd297912-3e70-4b27-a87c-13d2cac032c2] Pending
helpers_test.go:344: "busybox" [fd297912-3e70-4b27-a87c-13d2cac032c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fd297912-3e70-4b27-a87c-13d2cac032c2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003820258s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-134264 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-134264 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-134264 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-134264 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-134264 --alsologtostderr -v=3: (11.853943587s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.85s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-134264 -n default-k8s-diff-port-134264
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-134264 -n default-k8s-diff-port-134264: exit status 7 (104.39771ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-134264 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-134264 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0115 10:05:25.862476   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/functional-945307/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-134264 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m38.035081601s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-134264 -n default-k8s-diff-port-134264
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.57s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bnl4z" [3a70429b-8875-4b10-9b2e-b5340ac3778f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bnl4z" [3a70429b-8875-4b10-9b2e-b5340ac3778f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.003507506s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bnl4z" [3a70429b-8875-4b10-9b2e-b5340ac3778f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004012308s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-204032 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-204032 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.79s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-204032 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-204032 -n embed-certs-204032
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-204032 -n embed-certs-204032: exit status 2 (316.127276ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-204032 -n embed-certs-204032
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-204032 -n embed-certs-204032: exit status 2 (313.84677ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-204032 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-204032 -n embed-certs-204032
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-204032 -n embed-certs-204032
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.79s)

TestStartStop/group/newest-cni/serial/FirstStart (37.04s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-306393 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-306393 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (37.035144608s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.04s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-306393 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-306393 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.11021153s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/newest-cni/serial/Stop (1.23s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-306393 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-306393 --alsologtostderr -v=3: (1.230035738s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.23s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-306393 -n newest-cni-306393
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-306393 -n newest-cni-306393: exit status 7 (97.214ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-306393 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/newest-cni/serial/SecondStart (26.9s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-306393 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0115 10:07:08.371324   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/addons-154292/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-306393 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (26.577149664s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-306393 -n newest-cni-306393
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.90s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-306393 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.73s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-306393 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-306393 -n newest-cni-306393
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-306393 -n newest-cni-306393: exit status 2 (316.779499ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-306393 -n newest-cni-306393
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-306393 -n newest-cni-306393: exit status 2 (311.813549ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-306393 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-306393 -n newest-cni-306393
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-306393 -n newest-cni-306393
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.73s)

TestNetworkPlugins/group/auto/Start (68.89s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-011893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-011893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m8.891187538s)
--- PASS: TestNetworkPlugins/group/auto/Start (68.89s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-68w2z" [f891d10c-cae5-4760-a294-e9fcd95ff62b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-68w2z" [f891d10c-cae5-4760-a294-e9fcd95ff62b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.004573369s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-hvjtb" [849a860f-b178-420e-b7ea-4e3e243389cf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003547766s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-68w2z" [f891d10c-cae5-4760-a294-e9fcd95ff62b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004201416s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-274481 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-274481 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.77s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-274481 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-274481 -n no-preload-274481
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-274481 -n no-preload-274481: exit status 2 (314.784926ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-274481 -n no-preload-274481
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-274481 -n no-preload-274481: exit status 2 (309.8262ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-274481 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-274481 -n no-preload-274481
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-274481 -n no-preload-274481
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.77s)
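
The pause check relies on `minikube status` exiting non-zero while components are paused, which is why the harness annotates exit status 2 with "(may be ok)". A sketch of the same sequence, using the Go-template selectors shown in the log (quoting the template is only for shell safety; the harness passes it unquoted):

    out/minikube-linux-amd64 pause -p no-preload-274481
    # While paused these report Paused/Stopped and exit with status 2
    out/minikube-linux-amd64 status -p no-preload-274481 -n no-preload-274481 --format='{{.APIServer}}'
    out/minikube-linux-amd64 status -p no-preload-274481 -n no-preload-274481 --format='{{.Kubelet}}'
    out/minikube-linux-amd64 unpause -p no-preload-274481
    # After unpause the same two status calls exit 0 again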

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-hvjtb" [849a860f-b178-420e-b7ea-4e3e243389cf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003423247s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-777386 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (70.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-011893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-011893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m10.786574457s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.79s)
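
All of the Start cases in this group share the same shape of invocation and differ only in the `--cni` value. The flags below are copied from the run above (profile name `kindnet-011893` is this run's generated name; `--alsologtostderr` is dropped for brevity):

    out/minikube-linux-amd64 start -p kindnet-011893 --memory=3072 --wait=true --wait-timeout=15m \
      --cni=kindnet --driver=docker --container-runtime=crio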

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-777386 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-777386 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-777386 -n old-k8s-version-777386
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-777386 -n old-k8s-version-777386: exit status 2 (343.591037ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-777386 -n old-k8s-version-777386
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-777386 -n old-k8s-version-777386: exit status 2 (333.826932ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-777386 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-777386 -n old-k8s-version-777386
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-777386 -n old-k8s-version-777386
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (60.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-011893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-011893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m0.931531625s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-011893 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)
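
KubeletFlags simply shells into the node and prints the running kubelet command line so its CNI-related flags can be inspected; the equivalent manual check is the command from the log:

    # Show the kubelet process and its full argument list inside the node
    out/minikube-linux-amd64 ssh -p auto-011893 "pgrep -a kubelet"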

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-011893 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vfk8t" [520e7da4-c405-4208-aa31-8b9cdeab39bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vfk8t" [520e7da4-c405-4208-aa31-8b9cdeab39bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004851061s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.16s)
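
NetCatPod (re)creates a small netcat/dnsutils deployment and waits for it to become Ready; the later DNS, Localhost and HairPin cases all exec into this deployment. A hand-run sketch, where kubectl wait stands in for the harness's own polling loop:

    kubectl --context auto-011893 replace --force -f testdata/netcat-deployment.yaml
    # 900s mirrors the 15m0s wait in the log
    kubectl --context auto-011893 wait --for=condition=ready pod -l app=netcat --timeout=900s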

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-011893 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-011893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-011893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
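
Localhost and HairPin differ only in the target host: Localhost checks that the pod can reach its own port via localhost, while HairPin checks that it can reach itself back through its service name (`netcat` here), i.e. hairpin NAT. Both use netcat's zero-I/O port scan:

    # -z: only test the connection, -w 5: 5s timeout, -i 5: 5s interval between probes
    kubectl --context auto-011893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-011893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"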

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (57.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-011893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-011893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (57.570745427s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.57s)
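
Note that `--cni` accepts either a built-in plugin name or a path to a CNI manifest, which is what distinguishes the custom-flannel case from the plain flannel case elsewhere in this report (shared flags such as --memory and --wait are omitted here for brevity):

    # Named built-in plugin vs. a custom manifest path
    out/minikube-linux-amd64 start -p flannel-011893 --cni=flannel --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 start -p custom-flannel-011893 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio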

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-zrslx" [a8cf6b34-3f14-4765-a2f4-10589fc894d5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005605223s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
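
ControllerPod only verifies that the CNI's node agent is Running; the label selector and namespace differ per plugin, as the waits in this report show. Equivalent checks by hand, with the selectors taken from the corresponding log lines:

    kubectl --context calico-011893  get pods -n kube-system  -l k8s-app=calico-node
    kubectl --context kindnet-011893 get pods -n kube-system  -l app=kindnet
    kubectl --context flannel-011893 get pods -n kube-flannel -l app=flannel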

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-c7qdq" [33efc5b2-dfaa-4321-b986-efe1004b2723] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005096704s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-011893 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-011893 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zggs7" [41d0011e-28ca-4cdb-85f9-32ce2c092ead] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zggs7" [41d0011e-28ca-4cdb-85f9-32ce2c092ead] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004234018s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-011893 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-011893 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-krlt9" [72ff5056-00da-49fd-8b51-59bfa8c72603] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-krlt9" [72ff5056-00da-49fd-8b51-59bfa8c72603] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003484599s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-011893 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-011893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-011893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-011893 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-011893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-011893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-011893 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-011893 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kxxnw" [111aa108-a14a-40e9-a4d2-5af3874fbe46] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kxxnw" [111aa108-a14a-40e9-a4d2-5af3874fbe46] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.005200085s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (86.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-011893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-011893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m26.75935393s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (86.76s)
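
The enable-default-cni and bridge variants cover the two non-plugin paths: `--enable-default-cni=true` asks minikube to apply its default CNI config, while `--cni=bridge` selects the plain bridge plugin. Otherwise the invocation matches the other Start cases (verbose logging flag omitted):

    out/minikube-linux-amd64 start -p enable-default-cni-011893 --memory=3072 --wait=true --wait-timeout=15m \
      --enable-default-cni=true --driver=docker --container-runtime=crio
    # The bridge case is identical except it passes --cni=bridge on profile bridge-011893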

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-011893 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (66.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-011893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-011893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m6.452457695s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-011893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-011893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pwldj" [2afd600f-53ba-48f4-b510-52bac1262771] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0115 10:10:25.863155   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/functional-945307/client.crt: no such file or directory
E0115 10:10:33.380423   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/old-k8s-version-777386/client.crt: no such file or directory
E0115 10:10:33.385862   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/old-k8s-version-777386/client.crt: no such file or directory
E0115 10:10:33.396210   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/old-k8s-version-777386/client.crt: no such file or directory
E0115 10:10:33.416534   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/old-k8s-version-777386/client.crt: no such file or directory
E0115 10:10:33.456819   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/old-k8s-version-777386/client.crt: no such file or directory
E0115 10:10:33.537811   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/old-k8s-version-777386/client.crt: no such file or directory
E0115 10:10:33.698321   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/old-k8s-version-777386/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pwldj" [2afd600f-53ba-48f4-b510-52bac1262771] Running
E0115 10:10:34.018444   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/old-k8s-version-777386/client.crt: no such file or directory
E0115 10:10:34.659413   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/old-k8s-version-777386/client.crt: no such file or directory
E0115 10:10:35.940230   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/old-k8s-version-777386/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.004296507s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pwldj" [2afd600f-53ba-48f4-b510-52bac1262771] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004114785s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-134264 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (38.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-011893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0115 10:10:43.622337   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/old-k8s-version-777386/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-011893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (38.344375635s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.34s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-134264 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-134264 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-134264 --alsologtostderr -v=1: (1.103159461s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-134264 -n default-k8s-diff-port-134264
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-134264 -n default-k8s-diff-port-134264: exit status 2 (314.977108ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-134264 -n default-k8s-diff-port-134264
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-134264 -n default-k8s-diff-port-134264: exit status 2 (351.154351ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-134264 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-134264 -n default-k8s-diff-port-134264
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-134264 -n default-k8s-diff-port-134264
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.48s)
E0115 10:10:53.862718   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/old-k8s-version-777386/client.crt: no such file or directory
E0115 10:11:14.343454   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/old-k8s-version-777386/client.crt: no such file or directory

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-011893 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-011893 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hp2k6" [00514f33-385a-4fad-a510-3502f9f4e09c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hp2k6" [00514f33-385a-4fad-a510-3502f9f4e09c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003951427s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rh8kr" [52269b72-7cdd-40e8-a652-900fb83b480c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004280993s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-011893 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-011893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-011893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-011893 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-011893 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b49zq" [5aa1456b-acef-4394-828d-6877afcfa877] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-b49zq" [5aa1456b-acef-4394-828d-6877afcfa877] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.00394408s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-011893 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-011893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-011893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-011893 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-011893 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dtrws" [9e680024-aa7e-451c-ab01-50991af27ebe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dtrws" [9e680024-aa7e-451c-ab01-50991af27ebe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003175784s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-011893 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-011893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-011893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0115 10:11:55.304207   11825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/old-k8s-version-777386/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    

Test skip (27/320)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-186449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-186449
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-011893 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-011893

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-011893

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-011893

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-011893

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-011893

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-011893

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-011893

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-011893

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-011893

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-011893

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-011893

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-011893" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-011893" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 09:57:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-167587
contexts:
- context:
    cluster: pause-167587
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 09:57:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-167587
  name: pause-167587
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-167587
  user:
    client-certificate: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/pause-167587/client.crt
    client-key: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/pause-167587/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-011893

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-011893"

                                                
                                                
----------------------- debugLogs end: kubenet-011893 [took: 4.001655873s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-011893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-011893
--- SKIP: TestNetworkPlugins/group/kubenet (4.27s)
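
Note: the kubenet group above is skipped before any cluster is created because kubenet provides no CNI, while the crio runtime used in this run requires one (net_test.go:93), which is why every debugLogs command then reports a missing kubenet-011893 context or profile. A minimal, hypothetical Go sketch of such a runtime-gated skip follows; it is not minikube's actual net_test.go, and the CONTAINER_RUNTIME environment variable is assumed here purely for illustration.

package net_test

import (
	"os"
	"testing"
)

// TestKubenetSkip shows the pattern: bail out early when the selected
// container runtime cannot support the network plugin under test, so no
// profile or kubectl context is ever created for it.
func TestKubenetSkip(t *testing.T) {
	runtime := os.Getenv("CONTAINER_RUNTIME") // assumption: runtime selection passed via env
	if runtime == "crio" || runtime == "containerd" {
		t.Skipf("Skipping the test as the %s container runtime requires CNI", runtime)
	}
	// ... real network-plugin checks would run here ...
}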

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-011893 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-011893

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-011893

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-011893

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-011893

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-011893

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-011893

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-011893

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-011893

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-011893

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-011893

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-011893

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-011893" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-011893

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-011893

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-011893

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-011893

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-011893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-011893" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17953-3696/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 09:57:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-167587
contexts:
- context:
    cluster: pause-167587
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 09:57:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-167587
  name: pause-167587
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-167587
  user:
    client-certificate: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/pause-167587/client.crt
    client-key: /home/jenkins/minikube-integration/17953-3696/.minikube/profiles/pause-167587/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-011893

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-011893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-011893"

                                                
                                                
----------------------- debugLogs end: cilium-011893 [took: 4.032470329s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-011893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-011893
--- SKIP: TestNetworkPlugins/group/cilium (4.21s)
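
Note: every debugLogs command for cilium-011893 (and kubenet-011893 above) fails the same way because the skipped test never provisioned a profile, so no matching kubectl context exists; the kubeconfig dump shows only pause-167587 with an empty current-context. A small, hypothetical Go snippet, assuming only that kubectl is on PATH, that reproduces the shape of these failures outside the test harness:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "cilium-011893" is the profile the skipped test would have created.
	cmd := exec.Command("kubectl", "--context", "cilium-011893", "get", "pods")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Expected: kubectl exits non-zero and prints something like
		// `error: context "cilium-011893" does not exist`.
		fmt.Printf("kubectl failed as expected: %v\n%s", err, out)
		return
	}
	fmt.Printf("unexpected success:\n%s", out)
}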

                                                
                                    